California, New York Could Become First States to Enact Laws Aiming to Prevent Catastrophic AI Harm
California and New York could become the first states to establish rules aiming to prevent the most advanced, large-scale artificial intelligence models — known as frontier AI models — from causing catastrophic harm involving dozens of casualties or billion-dollar damages.
The bill in California, which passed the state Senate earlier this year, would require large developers of frontier AI systems to implement and disclose the safety protocols they use to mitigate the risk of incidents that cause the deaths of 50 or more people or more than $1 billion in damages.
The bill, which is under consideration in the state Assembly, would also require developers to create a frontier AI framework that includes best practices for using the models. Developers would have to publish a transparency report that discloses the risk assessments used while developing the model.
In June, New York state lawmakers approved a similar measure; Democratic Gov. Kathy Hochul has until the end of the year to decide whether to sign it into law.
Under the measure, large developers would be required to implement a safety policy before deploying a frontier AI model, addressing the risk of critical harm — defined to include the death or serious injury of more than 100 people, or at least $1 billion in damages, caused or enabled by a frontier model through the creation or use of large-scale weapons systems or through AI committing criminal acts.
Frontier AI models are large-scale systems that exist at the forefront of artificial intelligence innovation. These models, such as OpenAI’s GPT-5, Google’s Gemini Ultra and others, are highly advanced and can perform a wide range of tasks by processing substantial amounts of data. These powerful models also have the potential to cause catastrophic harm.
California legislators last year attempted to pass stricter regulations on large developers to prevent the catastrophic harms of AI, but Democratic Gov. Gavin Newsom vetoed the bill. He said in his veto message that it would apply “stringent standards to even the most basic functions” of large AI systems. He wrote that small models could be “equally or even more dangerous” and worried about the bill curtailing innovation.
Over the following year, the Joint California Policy Working Group on AI Frontier Models wrote and published its report on how to approach frontier AI policy. The report emphasized the importance of empirical research, policy analyses and balance between the technology’s benefits and risks.
Tech developers and industry groups have opposed the bills in both states. Paul Lekas, the senior vice president of global public policy at the Software & Information Industry Association, wrote in an emailed statement to Stateline that California’s measure, while intended to promote responsible AI development, “is not the way to advance this goal, build trust in AI systems, and support consumer protection.”
The bill would create “an overly prescriptive and burdensome framework that risks stifling frontier model development without adequately improving safety,” he said, citing the same problems that led to last year’s veto. “The bill remains untethered to measurable standards, and its vague disclosure and reporting mandates create a new layer of operational burdens.”
NetChoice, a trade association of online businesses including Amazon, Google and Meta, sent a letter to Hochul in June, urging the governor to veto New York’s proposed legislation.
“While the goal of ensuring the safe development of artificial intelligence is laudable, this legislation is constructed in a way that would unfortunately undermine its very purpose, harming innovation, economic competitiveness, and the development of solutions to some of our most pressing problems, without effectively improving public safety,” wrote Patrick Hedger, the director of policy at NetChoice.
Madyson Fitzgerald is a content producer and staff writer for Stateline. The article originally appeared in Stateline. Stateline is part of States Newsroom, the nation’s largest state-focused nonprofit news organization, with reporting from every capital. Stateline journalists aim to illuminate the big challenges and policy trends that cross state borders. You may subscribe to Stateline here.