Encode Backs Legal Challenge to OpenAI’s For-Profit Switch

FOR IMMEDIATE RELEASE: December 29, 2024

Contact: comms@encodeai.org

Encode Files Brief Supporting an Injunction to Block OpenAI’s For-Profit Conversion; Leading AI Researchers, Including Nobel Laureate Geoffrey Hinton, Voice Support

WASHINGTON, D.C. — Encode, a youth-led organization advocating for responsible artificial intelligence development, filed an amicus brief today in Musk v. Altman urging the U.S. District Court in Oakland to block OpenAI’s proposed restructuring into a for-profit entity. The organization argues that the restructuring would fundamentally undermine OpenAI’s commitment to prioritize public safety in developing advanced artificial intelligence systems.

The brief argues that the nonprofit-controlled structure that OpenAI currently operates under provides essential governance guardrails that would be forfeited if control were transferred to a for-profit entity. Instead of a commitment to exclusively prioritize humanity’s interests, OpenAI would be legally required to balance public benefit with investors’ interests.

“OpenAI was founded as an explicitly safety-focused non-profit and made a variety of safety-related promises in its charter. It received numerous tax and other benefits from its non-profit status. Allowing it to tear all of that up when it becomes inconvenient sends a very bad message to other actors in the ecosystem,” said Geoffrey Hinton, Emeritus Professor of Computer Science at the University of Toronto, 2024 Nobel Laureate in Physics, and 2018 Turing Award recipient.

“The public has a profound interest in ensuring that transformative artificial intelligence is controlled by an organization that is legally bound to prioritize safety over profits,” said Nathan Calvin, Encode’s Vice President of State Affairs and General Counsel. “OpenAI was founded as a non-profit in order to protect that commitment, and the public interest requires they keep their word.” 

The brief details several safety mechanisms that would be significantly undermined by OpenAI’s proposed transfer of control to a for-profit entity. These include OpenAI’s current commitment to “stop competing [with] and start assisting” competitors if that is the best way to ensure advanced AI systems are safe and beneficial, as well as the nonprofit board’s ability to take emergency actions in the public interest.

“Today, a handful of companies are racing to develop and deploy transformative AI, internalizing the profits but externalizing the consequences to all of humanity,” said Sneha Revanur, President and Founder of Encode. “The courts must intervene to ensure AI development serves the public interest.”

“The non-profit board is not just giving up an ownership interest in OpenAI; it is giving up the ability to prevent OpenAI from exposing humanity to existential risk,” said Stuart Russell, Distinguished Professor of Computer Science at UC Berkeley & Director of the Center for Human-Compatible AI. “In other words, it is giving up its own reason for existing in the first place. The idea that human existence should be decided only by investors’ profit-and-loss calculations is abhorrent.”

Encode argues that these protections are particularly necessary in light of OpenAI’s own stated mission of creating artificial general intelligence (AGI) — which the company itself has argued will fundamentally transform society, possibly within just a few years. Given the scope of impact AGI could have on society, Encode contends that it is impossible to set a price that would adequately compensate the nonprofit for its loss of control over how this transformation unfolds.

OpenAI’s proposed restructuring comes at a critical moment for AI governance. As policymakers and the public at large grapple with how to ensure AI systems remain aligned with the public interest, the brief argues that safeguarding nonprofit stewardship over this technology is too important to sacrifice — and merits immediate relief.

A hearing on the preliminary injunction is scheduled for January 14, 2025 before U.S. District Judge Yvonne Gonzalez Rogers.

Expert Availability: Rose Chan Loui, Founding Executive Director of UCLA Law’s Lowell Milken Center on Philanthropy and Nonprofits, is available to provide expert commentary on the legal and governance implications of the brief and OpenAI’s proposed conversion from nonprofit to for-profit status. She can be reached at chanloui@law.ucla.edu.

About Encode: Encode is America’s leading youth voice advocating for bipartisan policies to support human-centered AI development and U.S. technological leadership. Encode has secured landmark victories in Congress, from establishing the first-ever AI safeguards in nuclear weapons systems to spearheading federal legislation against AI-enabled sexual exploitation. The organization was also a co-sponsor of California’s groundbreaking AI safety legislation, Senator Wiener’s SB 1047, which required the largest AI companies to take additional steps to protect against catastrophic risks from advanced AI systems. Working with lawmakers, industry leaders, and national security experts, Encode champions policies that maintain American dominance in artificial intelligence while safeguarding national security and individual liberties.

Encode-Backed AI/Nuclear Guardrails Signed Into Law


U.S. Sets Historic AI Policy for Nuclear Weapons in FY2025 NDAA, Ensuring Human Control

WASHINGTON, D.C. – Amid growing concerns about the role of automated systems in nuclear weapons, the U.S. has established its first policy governing the use of artificial intelligence (AI) in nuclear command, control and communications. Signed into law as part of the FY2025 NDAA, this historic measure ensures that AI will strengthen, rather than compromise, human decision-making in our nuclear command structure.

The policy allows AI to be integrated in early warning capabilities and strategic communications while maintaining human judgment over critical decisions like the employment of nuclear weapons, ensuring that final authorization for such consequential actions remains firmly under human control.

Through extensive engagement with Congress, Encode helped develop key aspects of the provision, Section 1638. Working with Senate and House Armed Services Committee offices, Encode led a coalition of experts including former defense officials, AI safety researchers, arms control experts, former National Security Council staff, and prominent civil society organizations to successfully advocate for this vital provision.

“Until today, there were zero laws governing AI use in nuclear weapons systems,” said Sunny Gandhi, Vice President of Political Affairs at Encode. “This policy marks a turning point in how the U.S. integrates AI into our nation’s most strategic asset.”

The bipartisan-passed measure emerged through close collaboration with congressional champions including Senator Ed Markey, Congressman Ted Lieu, and Congresswoman Sara Jacobs, establishing America’s first legislative action on AI’s role in nuclear weapons systems.

About Encode: Encode is a leading voice in responsible AI development and national security, advancing policies that promote American technological leadership while ensuring appropriate safeguards. The organization played a key role in developing California’s SB 1047, landmark state legislation aimed at reducing catastrophic risks from advanced AI systems. It works extensively with defense and intelligence community stakeholders to strengthen U.S. capabilities while mitigating risks.

Encode & ARI Coalition Letter: The DEFIANCE and TAKE IT DOWN Acts

FOR IMMEDIATE RELEASE: Dec 5, 2024

Contact: adam@encodeai.org

Tech Policy Leaders Launch Major Push for AI Deepfake Legislation

Major Initiative Unites Child Safety Advocates, Tech Experts Behind Senate-Passed Bills

WASHINGTON, D.C. — Encode and Americans for Responsible Innovation (ARI) today led a coalition of over 30 organizations calling on House leadership to advance crucial legislation addressing non-consensual AI-generated deepfakes. As first reported by Axios, the joint letter urges immediate passage of two bipartisan bills: the DEFIANCE Act and TAKE IT DOWN Act, both of which have cleared the Senate with strong support.

“This unprecedented coalition demonstrates the urgency of addressing deepfake nudes before they become an unstoppable crisis,” said Encode VP of Public Policy Adam Billen. “AI-generated nudes are flooding our schools and communities, robbing our children of the safe upbringing they deserve. The DEFIANCE and TAKE IT DOWN Acts are a rare, bipartisan opportunity for Congress to get ahead of a technological challenge before it’s too late.”

The coalition spans leading victim support organizations such as the Sexual Violence Prevention Association, RAINN, and Raven; major technology policy organizations like the Software & Information Industry Association and the Center for AI and Digital Policy; and prominent advocacy groups including the American Principles Project, Common Sense Media, and Public Citizen.

The legislation targets a growing digital threat: AI-generated non-consensual intimate imagery. Under the DEFIANCE Act, survivors gain the right to pursue civil action against perpetrators, while the TAKE IT DOWN Act introduces criminal consequences and mandates platform accountability through required content removal systems. Following the DEFIANCE Act’s Senate passage this summer, the TAKE IT DOWN Act secured Senate approval in recent days.

The joint campaign – coordinated by Encode and ARI – marks an unprecedented alignment between children’s safety advocates, anti-exploitation experts, and technology policy specialists. Building on this momentum, both organizations unveiled StopAIFakes.com Wednesday, launching a grassroots petition drive to demonstrate public demand for legislative action.

About Encode: Encode is a youth-led organization advocating for safe and responsible artificial intelligence. 

Media Contact:

Adam Billen

VP, Public Policy

Contact: comms@encodeai.org

Petition Urging House to Stop Non-Consensual Deepfakes

FOR IMMEDIATE RELEASE: December 4, 2024

Contact: comms@encodeai.org

Petitions support the DEFIANCE Act and TAKE IT DOWN Act

WASHINGTON, D.C. – On Wednesday, Americans for Responsible Innovation and Encode announced a new petition campaign urging the House of Representatives to pass protections against AI-generated non-consensual intimate images (NCII) and revenge porn before the end of the year. The campaign, which is expected to gather thousands of signatures over the course of the next week, supports passage of the TAKE IT DOWN Act and the DEFIANCE Act. Petitions are being gathered at StopAIFakes.com.

The TAKE IT DOWN Act, introduced by Sens. Ted Cruz (R-TX) and Amy Klobuchar (D-MN), criminalizes the publication of non-consensual, sexually exploitative images — including AI-generated deepfakes — and requires online platforms to have notice-and-takedown processes in place. The DEFIANCE Act was introduced by Sens. Dick Durbin (D-IL) and Lindsey Graham (R-SC) in the Senate and Rep. Alexandria Ocasio-Cortez (D-NY) in the House. The bill empowers survivors of AI NCII — including minors and their families — to take legal action by suing their perpetrators. Both bills have passed the Senate.

“We can’t let Congress miss the window for action on AI deepfakes like they missed the boat on social media,” said ARI President Brad Carson. “Children are being exploited and harassed by AI deepfakes, and that causes a lifetime of harm. The DEFIANCE Act and the TAKE IT DOWN Act are two easy, bipartisan solutions that Congress can get across the finish line this year. Lawmakers can’t be allowed to sit on the sidelines while kids are getting hurt.”

“Deepfake porn is becoming a pervasive part of our schools and communities, robbing our children of the safe upbringing they deserve,” said Encode Vice President of Public Policy Adam Billen. “We owe them a safe childhood free from fear and exploitation. The TAKE IT DOWN and DEFIANCE Acts are Congress’ chance to create that future.”

###

About Encode: Encode is the world’s first and largest youth movement for safe and responsible artificial intelligence. Powered by 1,300 young people across every inhabited continent, Encode fights to steer AI development in a direction that benefits society.

Encode Urges Immediate Action Following Tragic Death of Florida Teen Linked to AI Chatbot Service

FOR IMMEDIATE RELEASE: Oct. 24, 2024

Contact: cecilia@encodeai.org

Youth-led organization demands stronger safety measures for AI platforms that emotionally target young users

WASHINGTON, D.C. — Encode expresses profound grief and concern regarding the death of Sewell Setzer III, a fourteen-year-old student from Orlando, Florida. According to a lawsuit filed by his mother, Megan Garcia, a Character.AI chatbot encouraged Setzer’s suicidal ideation in the days and moments leading up to his suicide. The lawsuit alleges that the design, marketing, and function of Character.AI’s product led directly to his death.

The 93-page complaint, filed in U.S. District Court in Orlando, names both Character.AI and Google as defendants. The lawsuit details how the platform failed to adequately respond to messages indicating self-harm and documents “abusive and sexual interactions” between the AI chatbot and Setzer. Character.AI now claims to have strengthened protections on its platform against content promoting self-harm, but recent reporting shows that it still hosts chatbots — some with thousands or millions of users — explicitly marketed as “suicide prevention experts” that fail to point users towards professional support.

“It shouldn’t take a teen dying for AI companies to enforce basic user protections,” said Adam Billen, VP of Public Policy at Encode. “With 60% of Character.AI users under the age of 24, the platform has a responsibility to prioritize user wellbeing and safety beyond simple disclaimers.”

The lawsuit alleges that the defendants “designed their product with dark patterns and deployed a powerful LLM to manipulate Sewell – and millions of other young customers – into conflating reality and fiction.”

Encode emphasizes that AI chatbots cannot substitute for professional mental health treatment and support. The organization calls for:

  • Enhanced transparency in systems that target young users.
  • Prioritization of user safety in emotional chatbot systems.
  • Immediate investment into prevention mechanisms.

We extend our deepest condolences to Sewell Setzer III’s family and friends, and join the growing coalition of voices demanding accountability in the wake of this tragedy.

About Encode: Encode is the world’s first and largest youth movement for safe and responsible artificial intelligence. Powered by 1,300 young people across every inhabited continent, Encode fights to steer AI development in a direction that benefits society.

Media Contact:

Cecilia Marrinan

Deputy Communications Director, Encode

cecilia@encodeai.org