Scared of artificial intelligence? New law forces makers to disclose disaster plans

A new California law requires tech companies to disclose how they manage catastrophic risks from artificial intelligence systems. (Florence Middleton / CalMatters)


Tech companies that create large, advanced artificial intelligence models will soon have to share more information about how the models can impact society and give their employees ways to warn the rest of us if things go wrong.

Starting January 1, a law signed by Gov. Gavin Newsom gives whistleblower protections to employees at companies like Google and OpenAI whose work involves assessing the risk of critical safety incidents. It also requires large AI model developers to publish frameworks on their websites describing how the company responds to critical safety incidents and how it assesses and manages catastrophic risk. Fines for violating the frameworks can reach $1 million per violation. Under the law, companies must report critical safety incidents to the state within 15 days, or within 24 hours if they believe a risk poses an imminent threat of death or injury.

The law began as Senate Bill 53, authored by state Sen. Scott Wiener, a Democrat from San Francisco, to address catastrophic risk posed by advanced AI models, which are sometimes called frontier models. The law defines catastrophic risk as an instance in which the technology can kill more than 50 people through a cyberattack or harm people with a chemical, biological, radioactive, or nuclear weapon, or in which AI use results in more than $1 billion in theft or damage. It addresses those risks in scenarios where an operator loses control of an AI system, for example because the AI deceived them or acted on its own, situations that are largely considered hypothetical.

The law increases the information that AI makers must share with the public, including in a transparency report that must include the intended uses of a model, restrictions or conditions of using a model, how a company assesses and addresses catastrophic risk, and whether those efforts were reviewed by an independent third party.

The law will bring much-needed disclosure to the AI industry, said Rishi Bommasani, part of a Stanford University group that tracks transparency around AI. Only three of the 13 companies his group recently studied regularly carry out incident reporting, and the transparency scores his group issues to such companies fell on average over the last year, according to a newly issued report.


Bommasani is also a lead author of a report ordered by Newsom that heavily influenced SB 53 and that calls transparency a key to public trust in AI. He thinks the effectiveness of SB 53 depends heavily on the government agencies tasked with enforcing it and the resources they are allocated to do so.


“You can write whatever law in theory, but the practical impact of it is heavily shaped by how you implement it, how you enforce it, and how the company is engaged with it,” Bommasani said.

The law was influential even before it went into effect. The governor of New York, Kathy Hochul, credited it as the basis for the AI transparency and safety law she signed Dec. 19. The similarity will grow, City & State New York reported, as the law will be “substantially rewritten next year largely to align with California’s language.”

Limitations and implementation

The new law falls short no matter how well it is enforced, critics say. Its definition of catastrophic risk does not include issues like the impact of AI systems on the environment, their ability to spread disinformation, or their potential to perpetuate historical systems of oppression like sexism or racism. The law also does not apply to AI systems used by governments to profile people or assign them scores that can lead to a denial of government services or fraud accusations, and it only targets companies with more than $500 million in annual revenue.

Its transparency measures also stop short of full public visibility. In addition to publishing the transparency reports, AI developers must send incident reports to the Office of Emergency Services (OES) when things go wrong. Members of the public can also contact that office to report catastrophic risk incidents.

But the contents of incident reports submitted to OES by companies or their employees cannot be obtained by the public through records requests; they will instead be shared with members of the California Legislature and Newsom. Even then, they may be redacted to hide information that companies characterize as trade secrets, a common way companies avoid disclosing information about their AI models.

Bommasani hopes additional transparency will come from Assembly Bill 2013, a bill that became law in 2024 and also takes effect Jan. 1. It requires companies to disclose additional details about the data they use to train AI models.


Some elements of SB 53 don’t kick in until 2027, when the Office of Emergency Services will begin producing a report about the critical safety incidents the agency receives from the public and from large frontier model makers. That report may offer more clarity about the extent to which AI can mount attacks on infrastructure or act without human direction, but it will be anonymized, so the public won’t know which AI models pose these threats.

This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
