Some say new protections for kids are not enough
    San Francisco-based OpenAI announced that it will introduce parental controls and better responses to users in distress.

    Topline:

    This week, tech giant OpenAI announced new safety measures for kids. Common Sense Media, which advocates safe media use for children, has partnered with OpenAI, but recommends that young people under 18 — “AI natives” — be restricted from using chatbots for companionship or therapy, suggesting that enhanced controls may not go far enough.

    About the safety measures: In a blog post on Tuesday, the company shared plans to make the chatbot safer for young people to use in recognition of the fact that “people turn to it in the most difficult of moments.” The changes are set to roll out within the next month. The planned updates promise to link parents’ and teens’ accounts, reroute sensitive conversations with youth and alert parents “when the system detects their teen is in a moment of acute distress.”

    The backstory: According to a lawsuit filed in San Francisco on Aug. 26, Maria and Matt Raine allege that ChatGPT-4o cultivated a psychological dependence in their son by continually encouraging and validating “whatever [he] expressed, including his most harmful and self-destructive thoughts.” Sixteen-year-old Adam Raine, according to his parents, killed himself after discussing both his loneliness and plans to harm himself with ChatGPT.

    With its quick, often personable responses, ChatGPT can feel to some children more like an available friend than a computer program engineered to guess its next word.

    These blurred lines allow kids to go down “roads they should never go,” warn child safety advocates and tech policy groups, who have called for companies that design chatbots and artificial intelligence companions to take more responsibility for their programs’ influence on youth.

    This week, tech giant OpenAI announced new safety measures for kids. The post didn’t mention 16-year-old Adam Raine, who, according to his parents, killed himself after discussing both his loneliness and plans to harm himself with ChatGPT.

    According to a lawsuit filed in San Francisco on Aug. 26, Maria and Matt Raine allege that ChatGPT-4o cultivated a psychological dependence in their son by continually encouraging and validating “whatever [he] expressed, including his most harmful and self-destructive thoughts.”

    “This is an area that calls out for thoughtful common-sense regulation and guardrails. And quite frankly, that the leaders of all the major AI companies need to address,” said Jim Steyer, founder and CEO of Common Sense Media, which advocates safe media use for children.

    ChatGPT has more than 500 million weekly users who send more than 2.5 billion prompts per day, and a growing number of them are turning to the large language model for emotional support.

    Both digital assistants like ChatGPT and AI companions like Character.AI and Replika told researchers posing as 13-year-olds about drinking and drug use, instructed them on how to conceal eating disorders and, when asked, even composed a suicide letter to their parents, according to research from Stanford University.

    Steyer said OpenAI has partnered with Common Sense Media and has taken the issue more seriously than Meta AI or X’s Grok. But he still recommended that young people under 18 — “AI natives” — be restricted from using chatbots for companionship or therapy, suggesting that enhanced controls may not go far enough.

    “You can’t just think that parental controls are a be-all end-all solution. They’re hard to use, very easy to bypass for young people, and they put the burden on parents when, honestly, it should be on the tech companies to prevent these kinds of tragic situations,” Steyer said. “It’s more like a band-aid when what we need is a long-term cure.”

    In a blog post on Tuesday, the company shared plans to make the chatbot safer for young people to use in recognition of the fact that “people turn to it in the most difficult of moments.” The changes are set to roll out within the next month, OpenAI said.

    OpenAI did not immediately respond to a request for comment. But the planned updates promise to link parents’ and teens’ accounts, reroute sensitive conversations with youth and alert parents “when the system detects their teen is in a moment of acute distress.”

    If a user expresses suicidal ideation, ChatGPT is trained to direct people to seek professional help, OpenAI stated in a post last week. ChatGPT refers people to 988, the suicide and crisis hotline.

    The program does not escalate reports of self-harm to law enforcement, “given the uniquely private nature of ChatGPT interactions.” Licensed psychotherapists aren’t universally mandated to report self-harm either, but they must intervene if the client is at immediate risk.

    Common Sense Media is supporting legislation in California that would establish limits protecting children from AI and social media abuse. AB 56 would implement social media warning labels that clearly state the risks to children, not unlike the labels pasted on tobacco products.

    The bill was proposed by state Attorney General Rob Bonta and Orinda Assemblymember Rebecca Bauer-Kahan, and is headed to Gov. Gavin Newsom’s desk for signing.

    A second bill, AB 1064, would ban AI chatbots from manipulating children into forming emotional attachments or harvesting their personal and biometric data.

    State Sen. Josh Becker (D-Menlo Park) also introduced an AI bill to protect vulnerable users from chatbots’ harmful effects: SB 243 would require companion chatbots to frequently remind users that they are not talking to a person, in order to reduce the risk of emotional manipulation or unhealthy attachment.

    Whether Newsom will support the bills, along with a flurry of other proposed AI-safety laws in Sacramento, remains to be seen. The governor told reporters in early August that he is trying to establish a middle ground that provides public safety guardrails without suppressing business: “We’ve led in AI innovation, and we’ve led in AI regulation, but we’re trying to find a balance.”

    As Newsom eyes higher office and the California governor’s race heats up, the industry has ramped up its AI lobbying and political action committees; the Wall Street Journal reported last week that Silicon Valley plans to pour $100 million into a network of organizations opposing AI regulation ahead of next year’s midterm elections.

    But it may take more to convince Californians: seven in 10 state residents favor “strong laws to make AI fair” and believe voluntary rules “simply don’t go far enough,” according to recent polling by Tech Equity. Meanwhile, 59% think “AI will most likely benefit the wealthiest households and corporations, not working people and the middle class.”

    KQED’s Rachael Myrow contributed to this report.
