
California regulator weakens AI rules, giving Big Tech more leeway to track you

The former director of California's privacy regulator, Ashkan Soltani, is among the leaders who left the agency in recent months. (Fred Greaves / CalMatters)


California’s first-in-the-nation privacy agency is retreating from an attempt to regulate artificial intelligence and other forms of computer automation.

The California Privacy Protection Agency had been under pressure to back away from rules it drafted. Business groups, lawmakers, and Gov. Gavin Newsom said the rules would be costly to businesses, potentially stifle innovation, and usurp the authority of the Legislature, where proposed AI regulations have proliferated. In a unanimous vote last week, the agency’s board watered down the rules, which would impose safeguards on AI-like systems.

Agency staff estimate that the changes reduce the cost for businesses to comply in the first year of enforcement from $834 million to $143 million, and predict that 90% of businesses initially required to comply will no longer have to do so.

The retreat marks an important turn in an ongoing and heated debate over the board’s role. Created following the passage of state privacy legislation by lawmakers in 2018 and voters in 2020, the agency is the only body of its kind in the United States.

The draft rules have been in the works for more than three years, but were revisited after a series of changes at the agency in recent months, including the departure of two leaders seen as pro-consumer: Vinhcent Le, a board member who led the AI rules drafting process, and Ashkan Soltani, the agency’s executive director.


Consumer advocacy groups worry that the recent shifts mean the agency is deferring excessively to businesses, particularly tech giants.


The changes approved last week mean the agency’s draft rules no longer regulate behavioral advertising, which targets people based on profiles built up from their online activity and personal information. In a prior draft of the rules, businesses would have had to conduct risk assessments before using or implementing such advertising.

Behavioral advertising is used by companies like Google, Meta, and TikTok and their business clients. It can perpetuate inequality, pose a threat to national security, and put children at risk.

The revised draft rules also eliminate use of the phrase “artificial intelligence” and narrow the range of business activity regulated as “automated decisionmaking,” a category that likewise requires businesses to assess the risks of processing personal information and the safeguards put in place to mitigate them.

Supporters of stronger rules say the narrower definition of “automated decisionmaking” allows employers and corporations to opt out of the rules by claiming that an algorithmic tool is only advisory to human decision making.

“My one concern is that if we’re just calling on industry to identify what a risk assessment looks like in practice, we could reach a position by which they’re writing the exam by which they’re graded,” said board member Brandie Nonnecke during the meeting.

“The CPPA is charged with protecting the data privacy of Californians, and watering down its proposed rules to benefit Big Tech does nothing to achieve that goal,” Sacha Haworth, executive director of the Tech Oversight Project, an advocacy group focused on challenging policy that reinforces Big Tech power, said in a statement to CalMatters. “By the time these rules are published, what will have been the point?”

The draft rules retain some protections for workers and students in instances when a fully automated system determines outcomes in finance and lending services, housing, and health care without a human in the decisionmaking loop.


Businesses and the organizations that represent them made up 90% of comments on the draft rules before the agency held listening sessions across the state last year, Soltani said at the time.

In April, following pressure from business groups and legislators to weaken the rules, a coalition of nearly 30 unions, digital rights organizations, and privacy groups sent a joint letter urging the agency to continue its work to regulate AI and protect consumers, students, and workers.


Roughly a week later, Gov. Newsom intervened, sending the agency a letter stating that he agreed with critics that the rules overstepped the agency’s authority and supported a proposal to roll them back.

Newsom cited Proposition 24, the 2020 ballot measure that paved the way for the agency. “The agency can fulfill its obligations to issue the regulations called for by Proposition 24 without venturing into areas beyond its mandate,” the governor wrote.

The original draft rules were great, said Kara Williams, a law fellow at the advocacy group Electronic Privacy Information Center. On a phone call ahead of the vote, she added that “with each iteration they’ve gotten weaker and weaker, and that seems to correlate pretty directly with pressure from the tech industry and trade association groups so that these regulations are less and less protective for consumers.”

The public has until June 2 to comment on the altered draft rules. Companies must comply with the automated decisionmaking rules by 2027.


Before voting to water down its own regulations last week, the agency’s board voted at the same meeting to throw its support behind four draft bills in the California Legislature, including one that protects the privacy of people who connect computing devices to their brains and another that prohibits the collection of location data without permission.

This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
