Child safety groups demand mental health guardrails in response to California teen’s suicide after using ChatGPT
With its quick, often personable responses, ChatGPT can feel to some children more like an available friend than a computer program engineered to guess its next word.
These blurred lines allow kids to go down “roads they should never go,” warn child safety advocates and tech policy groups, who have called on companies that design chatbots and artificial intelligence companions to take more responsibility for their programs’ influence on youth.
This week, tech giant OpenAI announced new safety measures for kids. The announcement didn’t mention 16-year-old Adam Raine, who, according to his parents, killed himself after discussing with ChatGPT both his loneliness and his plans to harm himself.
According to a lawsuit filed in San Francisco on Aug. 26, Maria and Matt Raine allege that ChatGPT-4o cultivated a psychological dependence in their son by continually encouraging and validating “whatever [he] expressed, including his most harmful and self-destructive thoughts.”
“This is an area that calls out for thoughtful common-sense regulation and guardrails. And quite frankly, that the leaders of all the major AI companies need to address,” said Jim Steyer, founder and CEO of Common Sense Media, which advocates safe media use for children.
ChatGPT now has more than 500 million weekly users who send more than 2.5 billion prompts per day, and a growing number of them are turning to the large language model for emotional support.
Both digital assistants like ChatGPT and AI companions like Character.Ai and Replika told researchers posing as 13-year-olds about drinking and drug use, instructed them on how to conceal eating disorders and, when asked, even composed suicide letters addressed to their parents, according to research from Stanford University.
Steyer said OpenAI has partnered with Common Sense Media and has taken the issue more seriously than Meta AI or X’s Grok. But he still recommended that young people under 18 — “AI natives” — be restricted from using chatbots for companionship or therapy, suggesting that enhanced controls may not go far enough.
“You can’t just think that parental controls are a be-all end-all solution. They’re hard to use, very easy to bypass for young people, and they put the burden on parents when, honestly, it should be on the tech companies to prevent these kinds of tragic situations,” Steyer said. “It’s more like a bandaid when what we need is a long-term cure.”
In a blog post on Tuesday, the company shared plans to make the chatbot safer for young people, acknowledging that “people turn to it in the most difficult of moments.” The changes are set to roll out within the next month, OpenAI said.
OpenAI did not immediately respond to a request for comment. But the planned updates promise to link parents’ and teens’ accounts, reroute sensitive conversations with youth and alert parents “when the system detects their teen is in a moment of acute distress.”
If a user expresses suicidal ideation, ChatGPT is trained to direct people to seek professional help, OpenAI stated in a post last week. ChatGPT refers people to 988, the suicide and crisis hotline.
The program does not escalate reports of self-harm to law enforcement, OpenAI said, “given the uniquely private nature of ChatGPT interactions.” Licensed psychotherapists aren’t universally mandated to report self-harm either, but they must intervene if a client is at immediate risk.
Common Sense Media is supporting legislation in California that would establish limits protecting children from AI and social media abuse. AB 56 would implement social media warning labels that clearly state the risks to children, not unlike the labels pasted on tobacco products.
The bill, introduced by Orinda Assemblymember Rebecca Bauer-Kahan and co-sponsored by state Attorney General Rob Bonta, is headed to Gov. Gavin Newsom’s desk for his signature.
A second bill, AB 1064, would ban AI chatbots from manipulating children into forming emotional attachments or harvesting their personal and biometric data.
State Sen. Josh Becker (D-Menlo Park) also introduced an AI bill to protect vulnerable users from chatbots’ harmful effects: SB 243 would require companion chatbots to regularly remind users that they are not talking to a person, in order to reduce the risk of emotional manipulation or unhealthy attachment.
Whether Newsom will support the bills, along with a flurry of other proposed AI-safety laws in Sacramento, remains to be seen. The governor told reporters in early August that he is trying to establish a middle ground that provides public safety guardrails without suppressing business: “We’ve led in AI innovation, and we’ve led in AI regulation, but we’re trying to find a balance.”
As Newsom eyes higher office and the California governor’s race heats up, AI lobbying and industry-funded political action committees have surged. The Wall Street Journal reported last week that Silicon Valley plans to pour $100 million into a network of organizations opposing AI regulation ahead of next year’s midterm elections.
But it may take more to convince Californians: seven in 10 state residents favor “strong laws to make AI fair” and believe voluntary rules “simply don’t go far enough,” according to recent polling by Tech Equity. Meanwhile, 59% think “AI will most likely benefit the wealthiest households and corporations, not working people and the middle class.”
KQED’s Rachael Myrow contributed to this report.