New open letter raises the pressure on OpenAI to answer key questions about risks to its nonprofit mission

Geoffrey Hinton, Lawrence Lessig, Vitalik Buterin, Stephen Fry, and many others joined the letter, which calls for much greater transparency as the AI developer attempts a controversial corporate restructuring.

WASHINGTON – August 4, 2025 – More than 100 prominent AI experts, former OpenAI team members, public figures, and civil society groups signed an open letter published Monday calling for greater transparency from OpenAI.

The letter, embargoed until its release at 6:00 A.M. EDT on Monday, August 4, was jointly organized by the lead authors of Not For Private Gain, The Midas Project, and the newly named EyesOnOpenAI Coalition (a group of California nonprofit, philanthropic, and labor organizations). It focuses on OpenAI’s ongoing corporate restructuring, which threatens to weaken the company’s nonprofit mission.

OpenAI previously faced backlash over plans to disempower the nonprofit that currently controls it. While OpenAI now claims the nonprofit will remain in control of its for-profit subsidiary, others point out that the company’s plans are opaque and may completely undermine its original mission.

Concerns about the control of OpenAI and its technology are mounting as the company prepares to release GPT-5, expected to be the most powerful AI model ever released and anticipated to arrive as early as this week.

In the new open letter, signatories including Geoffrey Hinton, Lawrence Lessig, Vitalik Buterin, Stephen Fry, Audrey Tang, dozens of nonprofits, 50+ professors, and nine former OpenAI team members called on OpenAI to answer seven key questions about its plans to protect the nonprofit mission. These questions demand answers about:

  1. OpenAI’s legal duty to prioritize its charitable mission
  2. Whether the nonprofit will be disempowered
  3. Nonprofit board members’ potential financial conflicts of interest
  4. OpenAI investors’ profit caps
  5. Commercializing AGI
  6. OpenAI’s commitment to its charter
  7. The nonprofit’s operating agreement with its for-profit subsidiary 

Read the open letter.

Without transparency on these key issues, it will be impossible for the public to assess whether OpenAI is living up to its legal obligations. 

This is particularly important in light of recent reporting indicating that OpenAI is renegotiating the terms of its deal with Microsoft regarding the use of AGI and related technology. The OpenAI nonprofit would, under the current deal, control the fate of AGI, an advanced and highly valuable form of AI. Giving control of AGI over to for-profit actors would be a betrayal of OpenAI’s charitable mission.

OpenAI was founded in 2015 as a nonprofit with the mission of ensuring that artificial general intelligence benefits all of humanity. That makes everyone a legal beneficiary of that mission and gives everyone a stake in protecting its integrity.

“OpenAI is playing a shell game with the keys to humanity’s future,” said Nathan Calvin, Vice President of State Affairs and General Counsel at Encode AI, and a lead author of Not For Private Gain. “The public deserves much more transparency from an organization that claims to be operating in humanity’s best interest.”

“We’re trying to take OpenAI’s leadership at their word when they say they want to benefit humanity,” said Tyler Johnston, Executive Director of The Midas Project. “Now it’s time for them to show the receipts to make sure their mission is being fulfilled.”

“OpenAI was entrusted with a powerful public mission and billions in charitable assets — but it’s operating behind closed doors,” said Orson Aguilar, Co-Chair of the EyesOnOpenAI coalition. “If they truly believe AI will shape the future of humanity, then the public deserves a seat at the table. Anything less undermines their nonprofit promise.”

The letter has been signed by Nobel Prize winners, prominent public intellectuals, former OpenAI employees, machine learning experts, and a wide range of civil society groups. Notable signatories include Geoffrey Hinton, Sir Oliver Hart, Vitalik Buterin, Sir Stephen Fry, Audrey Tang, Lawrence Lessig, Stuart Russell, Gary Marcus, Helen Toner, the San Francisco Foundation, and the EyesOnOpenAI Coalition, among many others.

Encode AI is a youth-led advocacy organization that fights for a future where AI can fulfill its transformative potential while being developed responsibly. Through policy advocacy and public education, it works to steer the future of AI technology in a positive direction.

The Midas Project is a watchdog nonprofit working to ensure that AI technology benefits everybody, not just the companies developing it. It leads strategic initiatives to monitor tech companies, promote transparency in AI development, discourage corner-cutting, and advocate for the responsible development of emerging technologies.

EyesOnOpenAI is a coalition of more than 60 philanthropic, labor, and nonprofit organizations that have raised concerns about the restructuring in letters to California Attorney General Rob Bonta. The coalition is associated with the San Francisco Foundation and co-chaired by Orson Aguilar of LatinoProsperity.

Encode AI Applauds Newly Announced Amendments to SB 53

FOR IMMEDIATE RELEASE

July 9, 2025

Contact: Carrie Hutcheson, chutcheson@glenechogroup.com

Encode AI Applauds Newly Announced Amendments to Senate Bill (SB) 53 That Will Advance Responsible AI Development in California

SB 53 Amendments Would Codify Recommendations from the Joint California Policy Working Group on AI Frontier Models to Promote Industry-Wide Transparency 

Sacramento, CA ― Today, Senator Scott Wiener (D-San Francisco) announced amendments to expand Senate Bill (SB) 53, landmark AI legislation that will enable California to lead with a “trust-but-verify” approach to frontier AI governance. SB 53 is co-sponsored by Encode AI, Economic Security Action California and the Secure AI Project.

The new provisions draw on the recommendations of experts within the Joint California Policy Working Group on AI Frontier Models to establish a transparency requirement for the largest AI companies.

Importantly, SB 53 codifies voluntary commitments previously made by the largest AI developers ― including Meta, Google, OpenAI, and Anthropic ― to level the playing field and increase industry-wide accountability.

“California has long been the birthplace of major tech innovations. SB 53 will help keep it that way by ensuring AI developers responsibly build frontier AI models,” said Sneha Revanur, President and Founder of Encode AI. “This bill reflects a common-sense consensus on AI development, promoting transparency around companies’ safety and security practices.” 

The bill’s provisions apply only to a small number of well-resourced companies that are training the most advanced and powerful AI models. These companies will be required to publish important information regarding their safety protocols and risk evaluations and to report major safety incidents to the Attorney General. Additionally, the bill includes protections for whistleblowers within AI labs who present evidence of harm or other violations.

“We’re thrilled to co-sponsor legislation that builds off the Working Group’s insights and codifies the voluntary commitments made by major tech companies,” said Nathan Calvin, VP of State Affairs and General Counsel at Encode AI. “SB 53 demonstrates that Californians do not need to choose between AI innovation and safety.”

Significantly, the announcement regarding California’s SB 53 comes after the demise of the AI moratorium in the Senate Reconciliation Bill, which would have blocked state-level AI legislation for the next decade. Encode was instrumental in rallying opposition to the AI moratorium, leading five different open letters over a month and a half that collectively represented the views of 150+ organizations and 850+ concerned parents. 

SB 53 is supported by a broad coalition of researchers, industry leaders, and civil society advocates:

“At Elicit, we build AI systems that help researchers make evidence-based decisions by analyzing thousands of academic papers,” said Andreas Stuhlmüller, CEO of Elicit. “This work has taught me that transparency is essential for AI systems that people rely on for critical decisions. SB 53’s requirements for safety protocols and transparency reports are exactly what we need as AI becomes more powerful and widespread. As someone who’s spent years thinking about how AI can augment human reasoning, I believe this legislation will accelerate responsible innovation by creating clear standards that make future technology more trustworthy.”

“I have devoted my life to advancing the field of AI, but in recent years it has become clear that the risks it poses could threaten us all,” said Geoffrey Hinton, University of Toronto Professor Emeritus, Turing Award winner, Nobel laureate, and a “godfather of AI.” “Greater transparency requirements into how companies are addressing safety concerns from the most powerful technology of our time are an important step towards addressing those risks.”

“SB 53 is a smart, targeted step forward on AI safety, security, and transparency,” said Bruce Reed, Head of AI at Common Sense Media. “We thank Senator Wiener for reinforcing California’s strong commitment to innovation and accountability.”

“AI can bring tremendous benefits, but only if we steer it wisely. Recent evidence shows that frontier AI systems can resort to deceptive behavior like blackmail and cheating to avoid being shut down or fulfill other objectives,” said Yoshua Bengio, Full Professor at Université de Montréal, Co-President and Scientific Director of LawZero, Turing Award winner and a “godfather of AI.” “These risks must be taken with the utmost seriousness alongside other existing and emerging threats. By advancing SB 53, California is uniquely positioned to continue supporting cutting-edge AI while proactively taking a step towards addressing these severe and potentially irreversible harms.” 

“Including safety and transparency protections recommended by Gov. Newsom’s AI commission in SB 53 is an opportunity for California to be on the right side of history and advance commonsense AI regulations while our national leaders dither,” said Teri Olle, Director of Economic Security California Action, a co-sponsor of the bill. “In addition to making sure AI is safe, the bill would create a public option for cloud computing – the critical infrastructure necessary to fuel innovation and research. CalCompute would democratize access to this powerful resource that is currently enjoyed by a tiny handful of wealthy tech companies, and ensure that AI benefits the public. With inaction from the federal government – and on the heels of the defeat of the proposed 10-year moratorium on AI regulations – California should act now and get this done.”

“The California Report on Frontier AI Policy underscored the growing consensus for the importance of transparency into the safety practices of the largest AI developers,” said Thomas Woodside, Co-Founder and Senior Policy Advisor, Secure AI Project, a co-sponsor of the bill. “SB 53 ensures exactly that: visibility into how AI developers are keeping their AI systems secure and Californians safe.”

“Reasonable people can disagree about many aspects of AI policy, but one thing is clear: reporting requirements and whistleblower protections like those in SB 53 are sensible steps to provide transparency, inform the public, and deter egregious practices without interfering with innovation,” said Steve Newman, technical co-founder of eight technology startups, including Writely (which became Google Docs), and co-creator of Spectre, one of the most influential video games of the 1990s.

###

Encode AI, Common Sense Media, Fairplay, Young People’s Alliance Lead 140+ Kids Safety Organizations in Opposition to the 10-Year Moratorium on the Enforcement of State AI Laws

Washington DC, June 28, 2025 — Encode AI, along with Common Sense Media, Fairplay, and the Young People’s Alliance, led a coalition of 140+ advocacy organizations in calling on the Senate to pull a ban on the enforcement of state-level AI legislation for the next decade.

“We write to urge you to oppose the provision in the House Energy and Commerce Committee’s Budget Reconciliation text that would put a moratorium on the enforcement of state artificial intelligence (AI) legislation for the next ten years,” wrote the coalition. “By wiping out all existing and future state AI laws without putting new federal protections in place, AI companies would get exactly what they want: no rules, no accountability, and total control. As organizations working on the frontline of the consequences of AI development with no guardrails, we know what this would mean for our children.”

“As written, the provision is so broad it would block states from enacting any AI-related legislation, including bills addressing hyper-sexualized AI companions, social media recommendation algorithms, protections for whistleblowers, and more,” the coalition continued. “It ties lawmakers’ hands for a decade, sidelining policymakers and leaving families on their own as they face risks and harms that emerge with this fast-evolving technology in the years to come.”

Today, Senator Blackburn shared the letter, stating: “Just last year, millions of high school students said they knew a classmate who had been victimized by AI-generated, image-based sexual abuse. This is why countless organizations are opposing misguided efforts to block state laws on AI. We must stand with them.”

“For over a decade, victims and the public have relied on state governments for what little protection they have against fast-moving technologies like social media—and now AI,” said Adam Billen, Vice President of Public Policy at Encode AI. “Big Tech knows it can stall legislation in Congress, so now it wants to strip states of the power to enforce current and future laws that safeguard the public from AI-driven harms.”

The provision was slightly modified this week after the parliamentarian reversed her original ruling, forcing the authors to make clearer that it applies only to the new $500M in BEAD funding. Still, a plain reading of the text shows that states that take any of the new $500M would be putting their full portion of the existing $42.5B in BEAD funding at risk. The provision continues to put an incredibly wide range of basic protections at risk, opens up small states to lawsuits they cannot afford to fend off, undermines the basic tenets of federalism, and would incentivize states to adopt broad private rights of action as the enforcement mechanism in every AI bill going forward.

The full letter is available here.

The RAISE Act: Myth vs. Facts

Industry associations and groups that ideologically oppose all regulation are likely to make the following misleading arguments against the RAISE Act (A6453/S6953). Many of these arguments are misinformed and some are simply distortions of fact. Here’s the reality behind their claims:


Misleading Claim: RAISE will burden startups.

Reality: RAISE only applies to very large companies: developers spending at least $100 million on computational resources to train frontier AI models. Small startups are completely exempt. In fact, startups benefit from having more confidence that the models they are using and building products on are safe.

Misleading Claim: RAISE requires developers to anticipate every possible misuse of their systems, which is unrealistic.

Reality: RAISE only requires developers to report on how they are testing for a very narrowly defined set of highly severe risks, each of which could cause more than $1B in damages or more than 100 injuries or deaths in a single incident. Risk assessments occur in many industries – including cars, drugs, and airplanes – and never comprehensively identify all possible risks, which is impossible.

Misleading Claim: RAISE will shut down open source AI.

Reality: RAISE contains no requirements that are incompatible with open source AI. Its cybersecurity requirements only apply to models within the developer’s control and there are no provisions about responsibility for “derivative models” that have raised concern in other bills.

Misleading Claim: RAISE will cause AI developers to leave New York.

Reality: We’ve heard industry groups say this time and time again, but the truth is that companies keep working in New York because of the size of the market and the access to talent that it provides. Opponents made the same “exodus” argument about the Safe for Kids Act, the Health Information Privacy Act, and the Digital Fair Repair Act, but it has not happened. This case is even more clear cut because the requirements in RAISE are largely consistent with voluntary commitments that companies have already made.

Misleading Claim: The bill can wait until next year.

Reality: Acting next year could be too late. Some experts believe AI could begin causing the severe harms covered by RAISE as soon as this year. In March, a group of AI experts convened by California Governor Gavin Newsom concluded that “policy windows do not remain open indefinitely” and that the stakes for inaction at this moment could be “very high.” The exact timing of these risks remains uncertain, but New York should be prepared. See more here.

Misleading Claim: Regulations should target people misusing AI, not AI developers.

Reality: The two approaches are complementary, not contradictory. Just as it makes sense to regulate both the production and use of potentially dangerous products like cars or chemicals, it makes sense to regulate both the development and use of AI. If somebody uses AI to create a bioweapon, of course that person should be criminally prosecuted; but by then, it may be too late to prevent severe harms. AI developers should bear some responsibility for testing their products and putting reasonable safeguards in place, especially for very large-scale harms.

Misleading Claim: Companies integrating AI into their software should have responsibilities, not the developer of the underlying model they are using. What RAISE is doing is like regulating electricity or motors instead of cars.

Reality: The risks addressed by RAISE are a direct result of the actions of developers. Foundation models can, entirely on their own, generate malicious computer code or provide assistance in developing a biological weapon. In its February safety review, the research organization OpenAI found that its own models “are on the cusp of being able to meaningfully help novices create known biological threats… We expect current trends of rapidly increasing capability to continue, and for models to cross this threshold in the near future.” If regulation exempts original foundation model developers, it ignores a key risk factor.

The analogy with electricity is thus very misleading. Electricity and motors primarily pose risks far downstream of their use, in forms that do not resemble their initial design. This is not true of the risks from foundation models that RAISE addresses.

Misleading Claim: RAISE is just like SB 1047 in California.

Reality: RAISE is a different bill that learns from SB 1047 and takes into account the main complaints that industry made about SB 1047. It creates no new regulatory bodies, does not include developer liability for misuse of derivative models, and applies only to AI companies spending more than $100M on computational resources for AI development, not to cloud providers, datacenters, or smaller developers that modify existing frontier models.

Misleading Claim: Congress will pass something in this area, so New York shouldn’t act.

Reality: Congress has been unable to pass major technology regulation in over 20 years, with the possible exception of the TikTok ban, which the federal government is not currently enforcing. Congress has still not passed data privacy regulation despite bipartisan support. Congress is unlikely to act on this issue, making it necessary for states to lead just as they have in consumer protection, public health, and many other areas.

Misleading Claim: The risks addressed by RAISE are too speculative to do anything about.

Reality: Both the International AI Safety Report and the Joint California Policy Working Group report have acknowledged the risks addressed by RAISE, with the latter citing “growing evidence” for them. OpenAI recently warned that “our models are on the cusp of being able to meaningfully help novices create known biological threats.” With experts warning these risks could materialize very soon, it’s important to act now before it’s too late. Are opponents really advocating that the world needs to experience a catastrophe before acting to prevent it?

Misleading Claim: The RAISE Act is too vague to comply with.

Reality: AI is evolving rapidly, so hyper-specific technical rules would get out of date quickly. They would also be opposed, with good reason, by industry. Instead, RAISE uses common legal standards to incorporate reasonable flexibility that can keep up with evolving technology and give developers choice in technical measures. Standards like “unreasonable risk” are well understood in existing law and have been applied in many other domains.

Misleading Claim: RAISE will stifle innovation in the US and advantage China.

Reality: Safety standards go hand in hand with innovation: they prevent a misstep by one irresponsible player from disrupting the entire AI industry. Requiring American cars to have seatbelts doesn’t disadvantage us relative to other nations; the point of seatbelts is to enable speed and safety at the same time. It’s no different with AI.

The Bottom Line

The RAISE Act represents a balanced, targeted approach to ensuring AI safety without hindering innovation. The opposition’s arguments rely on hypothetical scenarios and outdated regulatory philosophies that fail to account for the unique challenges presented by advanced AI systems. New York has both the opportunity and responsibility to lead on this critical issue.

Encode Applauds State Lawmakers’ Letter in Opposition to AI Moratorium

Contact: comms@encodeai.org

Encode applauds the release of a letter from 260 lawmakers across every US state opposing the AI moratorium included in the House-passed budget reconciliation bill. The letter, led by Representative Guffey (R) of South Carolina and Senator Larson (D) of South Dakota, urges Congress to reject any provision that would preempt state and local AI regulation in this year’s reconciliation package.

The lawmakers point to risks posed by AI to kids’ safety online, vulnerable seniors targeted by scams, patients in our healthcare system, and more. These are all issues that Congress has failed to act on, but state legislators have stepped up, passing vital protections that shield vulnerable communities from real-world harm.

If those laws are overridden by Congress, the consequences would be dire. Americans in every state would lose protections they rely on, with no federal safeguards to take their place.

“By wiping out state AI legislation, Congress would leave millions of Americans vulnerable to harms that are already happening—from deepfake pornography to AI-driven fraud,” said Adam Billen, Vice President of Public Policy at Encode. “Just a few years ago we could not have imagined deepfake porn flooding our communities or AI-driven scams exploiting vulnerable seniors, but without state action victims everywhere would still be vulnerable. We must protect the ability of our state legislators to respond to the rapidly emerging risks from AI.”

###

The TAKE IT DOWN Act Clears Congress

Today, April 28th, 2025, the U.S. House of Representatives voted 409-2 to pass the TAKE IT DOWN Act. Having already passed the Senate in February, the bill now heads to the President’s desk for signature. With bipartisan support and endorsements from over 120 civil society organizations, companies, trade groups, and unions, the bill is primed to become law.

“Just a few years ago it was unthinkable that everyday people would have the technical expertise to generate realistic intimate deepfake content. But today anyone can open an app or website and create realistic nude images of anyone—an ex, a classmate, a coworker—in minutes. The resulting wave of abuse, compounded with the existing crisis of image-based sexual exploitation, has already robbed thousands of victims of their personhood and sense of self,” said Adam Billen, Vice President of Public Policy at Encode. “Congress has taken a critical step in addressing AI harms by passing the TAKE IT DOWN Act, which will not only hold perpetrators accountable but empower millions of victims to reclaim control of their images in the aftermath of abuse.”

Encode organized multiple coalition letters in support of the bill, published op-eds in Tech Policy Press and the Seattle Times, convened victims, students, and lawmakers, including sponsors Senators Cruz and Klobuchar, to discuss preventing deepfake porn in schools, and built a website tracking deepfake porn incidents in schools around the country. This morning, Encode’s Adam Billen spoke at a virtual press conference addressing the legislation’s constitutional grounding.