Industry associations and groups that ideologically oppose all regulation are likely to make the following misleading arguments against the RAISE Act (A6453/S6953). Many of these arguments are misinformed and some are simply distortions of fact. Here’s the reality behind their claims:
Misleading Claim: RAISE will burden startups.
Reality: RAISE only applies to very large companies: developers spending at least $100 million annually on computational resources to train AI models. Small startups are completely exempt. In fact, startups benefit from having more confidence that the models they are using and building products on are safe.
Misleading Claim: RAISE requires developers to anticipate every possible misuse of their systems, which is unrealistic.
Reality: RAISE only requires developers to report on how they are testing for a very narrowly defined set of highly severe risks, each of which could cause more than $1B in damages or more than 100 injuries or deaths in a single incident. Risk assessments are standard in many industries – including cars, drugs, and airplanes – and none of them comprehensively identifies every possible risk, because doing so is impossible.
Misleading Claim: RAISE will shut down open source AI.
Reality: RAISE contains no requirements that are incompatible with open source AI. Its cybersecurity requirements apply only to models within the developer’s control, and it contains none of the provisions about responsibility for “derivative models” that have raised concern in other bills.
Misleading Claim: RAISE will cause AI developers to leave New York.
Reality: We’ve heard industry groups make this argument time and time again, but the truth is that companies keep working in New York because of the size of its market and the access to talent it provides. Opponents made the same “exodus” argument about the Safe for Kids Act, the Health Information Privacy Act, and the Digital Fair Repair Act, and no exodus occurred. This case is even more clear-cut because the requirements in RAISE are largely consistent with voluntary commitments that companies have already made.
Misleading Claim: The bill can wait until next year.
Reality: Acting next year could be too late. Some experts believe AI could begin causing the severe harms covered by RAISE as soon as this year. In March, a group of AI experts convened by California Governor Gavin Newsom concluded that “policy windows do not remain open indefinitely” and that the stakes for inaction at this moment could be “very high.” The exact timing of these risks remains uncertain, but New York should be prepared.
Misleading Claim: Regulations should target people misusing AI, not AI developers.
Reality: The two approaches are complementary, not contradictory. Just as it makes sense to regulate both the production and use of potentially dangerous products like cars or chemicals, it makes sense to regulate both the development and use of AI. If somebody uses AI to create a bioweapon, of course that person should be criminally prosecuted; but by then, it may be too late to prevent severe harms. AI developers should bear some responsibility for testing their products and putting reasonable safeguards in place, especially for very large-scale harms.
Misleading Claim: Companies integrating AI into their software should have responsibilities, not the developer of the underlying model they are using. What RAISE is doing is like regulating electricity or motors instead of cars.
Reality: The risks addressed by RAISE are a direct result of developers’ own actions. Foundation models can, entirely on their own, generate malicious computer code or provide assistance in developing a biological weapon. In its February safety review, the research organization OpenAI found that its own models “are on the cusp of being able to meaningfully help novices create known biological threats… We expect current trends of rapidly increasing capability to continue, and for models to cross this threshold in the near future.” Regulation that exempts the original developers of foundation models ignores a key source of risk.
The analogy with electricity is thus very misleading. Electricity and motors primarily pose risks far downstream of their use, in forms that don’t resemble their initial design. That is not true of the foundation-model risks that RAISE addresses.
Misleading Claim: RAISE is just like SB 1047 in California.
Reality: RAISE is a different bill that learns from SB 1047 and addresses the main complaints industry made about it. RAISE creates no new regulatory bodies, does not hold developers liable for misuse of derivative models, and applies only to AI companies spending at least $100 million on compute for AI development – not to cloud providers, datacenters, or smaller developers that modify existing frontier models.
Misleading Claim: Congress will pass something in this area, so New York shouldn’t act.
Reality: Congress has been unable to pass major technology regulation in over 20 years, with the possible exception of the TikTok ban, which the federal government is not currently enforcing. Congress has still not passed data privacy regulation despite bipartisan support. Congress is unlikely to act on this issue, making it necessary for states to lead just as they have in consumer protection, public health, and many other areas.
Misleading Claim: The risks addressed by RAISE are too speculative to do anything about.
Reality: The International AI Safety Report acknowledges the risks addressed by RAISE, and the Joint California Policy Working Group report found “growing evidence” for them. OpenAI recently warned that “our models are on the cusp of being able to meaningfully help novices create known biological threats.” With experts warning these risks could materialize very soon, it’s important to act before it’s too late. Are opponents really arguing that the world must experience a catastrophe before acting to prevent one?
Misleading Claim: The RAISE Act is too vague to comply with.
Reality: AI is evolving rapidly, so hyper-specific technical rules would get out of date quickly. They would also be opposed, with good reason, by industry. Instead, RAISE uses common legal standards to incorporate reasonable flexibility that can keep up with evolving technology and give developers choice in technical measures. Standards like “unreasonable risk” are well understood in existing law and have been applied in many other domains.
Misleading Claim: RAISE will stifle innovation in the US and advantage China.
Reality: Safety standards go hand in hand with innovation by preventing a misstep by one irresponsible player from disrupting the entire AI industry. Requiring American cars to have seatbelts doesn’t put us at a disadvantage compared to other nations; seatbelts enable speed and safety at the same time. It’s no different with AI.
The Bottom Line
The RAISE Act represents a balanced, targeted approach to ensuring AI safety without hindering innovation. The opposition’s arguments rely on hypothetical scenarios and outdated regulatory philosophies that fail to account for the unique challenges presented by advanced AI systems. New York has both the opportunity and responsibility to lead on this critical issue.