The Brief

The most important stories for you to know today
  • Steps to mitigate disturbing chatbot interactions

    Topline:

    Psychologists and online safety advocates say parents are right to be worried. Extended chatbot interactions may affect kids' social development and mental health, they say. And the technology is changing so fast that few safeguards are in place.

    Why it matters: Generative AI chatbots are a growing part of life for American teens. A survey by the Pew Research Center found that 64% of adolescents are using chatbots, with 3 in 10 saying they use them daily.

    Be aware of the risks: A new report from the online safety company Aura shows that 42% of adolescents who use AI chatbots turn to them for companionship. Aura gathered data from the daily device use of 3,000 teens as well as surveys of families.

    Read on... for more tips from experts.

    It wasn't until a couple of years ago that Keri Rodrigues began to worry about how her kids might be using chatbots. She learned her youngest son was interacting with the chatbot in his Bible app — he was asking it some deep moral questions, about sin for instance.

    That's the kind of conversation that she had hoped her son would have with her and not a computer. "Not everything in life is black and white," she says. "There are grays. And it's my job as his mom to help him navigate that and walk through it, right?"

    Rodrigues has also been hearing from parents across the country who are concerned about AI chatbots' influence on their children. She is the president of the National Parents Union, which advocates for children and families. Many parents, she says, are watching chatbots claim to be their kids' best friends, encouraging children to tell them everything.

    Psychologists and online safety advocates say parents are right to be worried. Extended chatbot interactions may affect kids' social development and mental health, they say. And the technology is changing so fast that few safeguards are in place.

    The impacts can be serious. According to their parents' testimonies at a recent Senate hearing, two teens died by suicide after prolonged interactions with chatbots that encouraged their suicide plans.

    If you or someone you know may be considering suicide or be in crisis, call or text 988 to reach the 988 Suicide & Crisis Lifeline.

    But generative AI chatbots are a growing part of life for American teens. A survey by the Pew Research Center found that 64% of adolescents are using chatbots, with 3 in 10 saying they use them daily.

    "It's a very new technology," says Dr. Jason Nagata, a pediatrician and researcher of adolescent digital media use at the University of California San Francisco. "It's ever-changing and there's not really best practices for youth yet. So, I think there are more opportunities now for risks because we're still kind of guinea pigs in the whole process."


    And teenagers are particularly vulnerable to the risks of chatbots, he adds, because adolescence is a time of rapid brain development, which is shaped by experiences. "It is a period when teens are more vulnerable to lots of different exposures, whether it's peers or computers."

    But parents can minimize those risks, say pediatricians and psychologists. Here are some ways to help teens navigate the technology safely.

    1. Be aware of the risks

    A new report from the online safety company Aura shows that 42% of adolescents who use AI chatbots turn to them for companionship. Aura gathered data from the daily device use of 3,000 teens as well as surveys of families.

    That includes some disturbing conversations involving violence and sex, says psychologist Scott Kollins, chief medical officer at Aura, who leads the company's research on teen interactions with generative AI.

    "It is role play that is [an] interaction about harming somebody else, physically hurting them, torturing them," he says.

    He says it's normal for kids to be curious about sex, but learning about sexual interactions from a chatbot instead of a trusted adult is problematic.

    And chatbots are designed to agree with users, says pediatrician Nagata. So if your child starts a query about sex or violence, "the default of the AI is to engage with it and to reinforce it."

    He says spending a lot of time with chatbots — having extended conversations — also prevents teenagers from learning important social skills, like empathy, reading body language and negotiating differences.

    "When you're only or exclusively interacting with computers who are agreeing with you, then you don't get to develop those skills," he says.

    And there are mental health risks. According to a recent study by researchers at the nonprofit research organization RAND, Harvard and Brown universities, 1 in 8 adolescents and young adults use chatbots for mental health advice.

    But there have been numerous reports of individuals experiencing delusions, or what's being referred to as AI psychosis, after prolonged interactions with chatbots. This, as well as the concern over risks of suicide, has led psychologists to warn that AI chatbots pose serious risks to the mental health and safety of teens as well as vulnerable adults.

    "We see that when people interact with [chatbots] over long periods of time, that things start to degrade, that the chatbots do things that they're not intended to do," says psychologist Ursula Whiteside, CEO of a mental health nonprofit called Now Matters Now. For example, she says, chatbots "give advice about lethal means, things that it's not supposed to do but does happen over time with repeated queries."

    2. Stay engaged with kids' online lives 

    Keep an open dialogue going with your child, says Nagata.

    "Parents don't need to be AI experts," he says. "They just need to be curious about their children's lives and ask them about what kind of technology they're using and why."

    And have those conversations early and often, says psychologist Kollins of Aura.

    "We need to have frequent and candid but nonjudgmental conversations with our kids about what this content looks like," says Kollins, who's also a father to two teenagers. "And we're going to have to continue to do that."

    He often asks his teens about what platforms they are on. When he hears about new chatbots through his own research at Aura, he also asks his kids if they have heard of those or used them.

    "Don't blame the child for expressing or taking advantage of something that's out there to satisfy their natural curiosity and exploration," he says.

    And make sure to keep the conversations open-ended, says Nagata: "I do think that that allows for your teenager or child to open up about problems that they've encountered."

    3. Develop digital literacy 

    It's also important to talk to kids about the benefits and pitfalls of generative AI. And if parents don't understand all the risks and benefits, parents and kids can research that together, suggests psychologist Jacqueline Nesi at Brown University, who was involved in the American Psychological Association's recent health advisory on AI and adolescent health.

    "A certain amount of digital literacy and literacy does need to happen at home," she says.

    It's important for parents and teens to understand that while chatbots can help with research, they also make errors, says Nagata. And it is important for users to be skeptical and fact-check.

    "Part of this education process for children is to help them to understand that this is not the final say," explains Nagata. "You yourself can process this information and try to assess, what's real or not. And if you're not sure, then try to verify with other people or other sources."

    4. Parental controls only work if kids set up their own accounts

    If a child is using AI chatbots, it may be better for them to set up their own account on the platforms, says Nesi, instead of using chatbots anonymously.

    "Many of the more popular platforms now have parental controls in place," she says. "But in order for those parental controls to be in effect, a child does need to have their own account."

    But be aware, there are dozens of different AI chatbots that kids could be using. "We identified 88 different AI platforms that kids were interacting with," says Kollins.

    This underscores the importance of having an open dialogue with your child to stay aware of what they're using.

    5. Set time limits

    Nagata also advises setting boundaries around when kids use digital technology, especially at nighttime.

    "One potential aspect of generative AI that can also lead to mental health and physical health impacts are [when] kids are chatting all night long and it's really disrupting their sleep," says Nagata. "Because they're very personalized conversations, they're very engaging. Kids are more likely to continue to engage and have more and more use."

    And if a child is veering toward overuse and misuse of generative AI, Nagata recommends that parents set time limits or limit certain kinds of content on chatbots.

    6. Seek help for more vulnerable teens 

    Kids who are already struggling with their mental health or social skills are more likely to be vulnerable to the risks of chatbots, says Nesi.

    "So if they're already lonely, if they're already isolated, then I think there's a bigger risk that maybe a chatbot could then exacerbate those issues," she says.

    And it's also important to keep an eye on potential warning signs of poor mental health, she notes.

    Those warning signs involve sudden and persistent changes in mood, isolation or changes in how engaged they are at school.

    "Parents should be as much as possible trying to pay attention to the whole picture of the child," says Nesi. "How are they doing in school? How are they doing with friends? How are they doing at home if they are starting to withdraw?"

    If a teen is withdrawing from friends and family and restricting their social interactions to just the chatbot, that too is a warning sign, she says. "Are they going to the chatbot instead of a friend or instead of a therapist or instead of responsible adults about serious issues?"

    Also look for signs of dependence or addiction to a chatbot, she adds. "Are they having difficulty controlling how much they are using a chatbot? Like, is it starting to feel like it's controlling them? They kind of can't stop," she says.

    And if they see those signs, parents should reach out to a professional for help, says Nesi.

    "Speaking to a child's pediatrician is always a good first step," she says. "But in most cases, getting a mental health professional involved is probably going to make sense."

    7. The government has a role to play

    But Nesi acknowledges that the job of keeping children and teens safe from this technology shouldn't fall upon parents alone.

    "There's a responsibility, you know, from lawmakers, from the companies themselves to make these products safe for teens."

    Lawmakers in Congress recently introduced bipartisan legislation to ban tech companies from offering companion apps to minors and to hold companies accountable for making companion apps that produce or solicit sexual content available to minors.

    If you or someone you know may be considering suicide or be in crisis, call or text 988 to reach the 988 Suicide & Crisis Lifeline.
    Copyright 2025 NPR

  • You asked us: Why are they there?
    An official ballot envelope for the 2026 primary election in Los Angeles.

    Topline:

    Have you noticed that the envelope for your mail-in ballot has holes in it? It turns out they have two functions (neither of which includes being able to see your votes inside).

    Accessibility: The two holes beside the signature line are there to help visually impaired people so they can sign their envelopes in private before submitting their ballot.

    Counting confirmation: They also help election officials confirm that the envelopes are empty when they’re processing the ballots to be counted.

    When you sit down to fill out your mail-in ballot for the June 2 primary election (we have a guide for that, have you heard?), you may notice something curious on your ballot envelope.

    There are holes in it. Two small holes next to the signature line, and one on the other side.

    What’s the deal?

    This is a question an LAist reader asked our Voter Game Plan team:

    “Does the hole in the mail-in ballot have a specific see-through function?”

    It turns out the envelope holes have two functions. For one, the holes next to the signature line are supposed to help visually impaired people find the signature line so that they can sign their ballot in private before submitting it.

    And two: When election workers start processing the ballots to be counted, the holes help them confirm that the envelopes don’t still have ballots left inside.

    These holes have been part of the envelope design for many election cycles now — according to the L.A. County registrar’s office, they were included based on a recommendation from the nonprofit Center for Civic Design.

    Rest assured, they are not meant for anybody to be able to see your votes inside. Even if you try to make your vote visible, the holes just don’t line up.

    Don’t forget to check out our Voter Game Plan guides while you’re filling out your ballot.


  • Katy Perry, Lisa will headline SoFi bash
    SoFi Stadium will be home to FIFA World Cup 2026 games this summer.

    Topline:

    Katy Perry, Future, Blackpink's Lisa and other artists will headline the FIFA World Cup opening ceremony at SoFi Stadium on June 12 — one of three happening across North America.

    The selection: The lineup just announced "reflects the cultural diversity of the United States and the vibrancy of its many diasporas," FIFA President Gianni Infantino said in a statement.

    Why it matters: It's the first time the global competition will hold three opening ceremonies across multiple countries. Mexico City hosts one on June 11 and Toronto hosts another on June 12.

    What's next: Los Angeles will host eight games. The first match will take place on June 12 between the U.S. men’s national team and Paraguay. The opening ceremony will begin at 4:30 p.m., 90 minutes before kickoff.

    Tickets are available now through FIFA and will continue to be released throughout the tournament.

    Go deeper: Watch FIFA’s World Cup games with your fellow Angelenos across LA County

  • Record amount for breaking privacy law
    A Chevrolet Bolt EV sits parked in the sales lot at Stewart Chevrolet in Colma on April 25, 2023.

    Topline:

    General Motors agreed to pay $12.75 million in civil penalties for selling driving data of hundreds of thousands of California motorists to data brokers, allegedly without their consent.

    Background: It stemmed from an investigation by California Attorney General Rob Bonta, several county district attorneys, and the California Privacy Protection Agency, which enforces the privacy act. They said General Motors misled drivers who paid for the emergency roadside and navigation service OnStar and made approximately $20 million from the unlawful sale of their data between 2020 and 2024. The information included names, location information, driving behavior, and contact information, Bonta said, which went to the data brokers LexisNexis Risk Solutions and Verisk Analytics.

    Read on ... for more on GM's actions and the penalty.

    General Motors agreed to pay $12.75 million in civil penalties for selling driving data of hundreds of thousands of California motorists to data brokers, allegedly without their consent.

    The settlement, announced Friday, is the largest ever for violations of the California Consumer Privacy Act, a 2018 law that requires companies to tell consumers about how their data is shared and to respect requests to stop the sharing.

    It stemmed from an investigation by California Attorney General Rob Bonta, several county district attorneys, and the California Privacy Protection Agency, which enforces the privacy act. They said General Motors misled drivers who paid for the emergency roadside and navigation service OnStar and made approximately $20 million from the unlawful sale of their data between 2020 and 2024. The information included names, location information, driving behavior, and contact information, Bonta said, which went to the data brokers LexisNexis Risk Solutions and Verisk Analytics.

    “This trove of information included precise and personal location data that could identify the everyday habits and movements of Californians,” Bonta said in a press release.

    The settlement also requires GM to stop selling data to any consumer reporting agencies for five years and submit privacy assessments to the state, among other provisions. It followed a similar agreement between the Federal Trade Commission and GM earlier this year and California settlements with Honda and Ford over the past 14 months for their own violations of the privacy act.

    California’s investigation of GM began after a 2024 New York Times investigation found GM collected data about millions of drivers nationwide and sold it to insurance companies in order to charge the drivers higher premiums. Californians were not impacted by those premium hikes because a state law prohibits insurers from using driving data to set insurance rates, Bonta said.

    Bonta told CalMatters at a press conference Friday that it’s unclear if location data collected by General Motors was used by other companies to make predictions about the prices people are willing to pay for goods. That practice is better known as surveillance pricing and can leverage location data. Target paid $5 million to settle a suit from San Diego County’s district attorney over its alleged use of location data for the technique. Bonta’s office began an investigation into the surveillance pricing practices of businesses in January.

    “I understand that there could be some overlap and maybe we'll discover something in our investigation in surveillance pricing, but that wasn't the focus of this case,” he said.

    Los Angeles District Attorney Nathan Hochman said the case started with one person finding location data in a report they requested about the data collected on them. That discovery, he added, led to investigations by journalists, prosecutors, and regulators.

    “This case shows more than anything that one consumer can make a huge difference,” he said.

    Though the settlement isn’t much compared to the $2.7 billion in net income that General Motors made last year, Hochman called it an indication that companies should expect higher penalties in the future. California reached a privacy law violation settlement with Disney in February for $2.75 million, previously the largest of its kind.

    In a statement shared with CalMatters, General Motors spokesperson Charlotte McCoy said, “This agreement addresses Smart Driver, a product we discontinued in 2024, and reinforces steps we’ve taken to strengthen our privacy practices. Vehicle connectivity is central to a modern and safe driving experience, which is why we’re committed to being clear and transparent with our customers about our practices and the choices and control they have over their information.”

    Californians will soon have a new protection against companies that use their data without their consent. Starting August 1, the more than 500 data brokers registered with the state must comply with requests California residents can make using an online tool known as the Delete Request and Opt-out Platform, or DROP. The privacy protection agency introduced the tool earlier this year.

    This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

  • No plans to reopen to the public
    Pedestrians walk along Wilshire Boulevard adjacent to the fenced-off RFK Community Park in Koreatown on April 22 in Los Angeles.

    Topline:

    The Los Angeles Unified School District fenced off RFK Inspiration Park, located on Wilshire Boulevard. Nearly a year later, the district is considering reopening the space, but only to students at the adjacent RFK Community Schools.

    Why now? Enrique Legaspi, assistant principal at RFK Community Schools, said the school and the district are discussing using the park again, including for classes and student activities. LAUSD confirmed that school leaders have expressed strong interest in using the space for outdoor learning, art programs and student wellness activities.

    Background: For years, the city’s Department of Recreation and Parks operated and maintained the park under an agreement with the school district dating back to 2010. At the time, the public was allowed to use the space. Last March, the department stepped away. By then, it had already been taking on costs outside what the 2010 agreement required.

    Read on ... for more on the battle over the park.

    For nearly a year, people walking down Wilshire Boulevard in Koreatown have passed a small patch of what used to be one of the few public park spaces in the neighborhood. It’s now locked behind a tall chain link fence.

    Inside, the grass is overgrown and trash is piled up along the edges. The memorial to Sen. Robert F. Kennedy — built at the site where he was assassinated in 1968 at the Ambassador Hotel — has fallen into disrepair.

    The Los Angeles Unified School District fenced off RFK Inspiration Park, located on Wilshire Boulevard. Nearly a year later, the district is considering reopening the space, but only to students at the adjacent RFK Community Schools.

    That’s frustrating for some neighbors, who say the park used to belong to everyone.

    “I remember the park being open and suddenly a few months after, it was gated,” said Vanessa Aikens, who lives a few blocks away. “I was just wondering why they gated the area because there seemed to be a lot of people interacting with it.”

    There has been little information relayed to the community about why.

    “We have a number of our members who live right around there and so there’s an angle of access to green space, the access to a safe space for our homeless neighbors,” said Yuval Yossefy, treasurer of Ktown for All, an all-volunteer grassroots organization serving Koreatown’s unhoused community. “This went basically unnoticed.”

    Enrique Legaspi, assistant principal at RFK Community Schools, said the school and the district are discussing using the park again, including for classes and student activities. LAUSD confirmed that school leaders have expressed strong interest in using the space for outdoor learning, art programs and student wellness activities.

    Officials plan to involve the school community and nearby residents as plans take shape, but they have not given a timeline or said whether the park will reopen to the public.

    Koreatown lacks parks

    For years, the city’s Department of Recreation and Parks operated and maintained the park under an agreement with the school district dating back to 2010. At the time, the public was allowed to use the space.

    Last March, the department stepped away. By then, it had already been taking on costs outside what the 2010 agreement required.

    “RAP communicated uncertainty about its ability to sustain long-term maintenance due to staffing and funding constraints,” said Deirdra Boykin, a department spokesperson.

    For people who live nearby, the loss of the park has been simple and immediate: there’s nowhere else like it.

    “There are no parks around where I live,” Aikens said. “Now I just walk straight down the street.”

    Even in a neighborhood with such limited park space, the memorial park’s closure went relatively unnoticed.

    “There definitely isn’t enough green space here,” said Emere Alademir, 23, who lives nearby. “I’m originally from Toronto and everywhere they have green space.”

    People who never used the park say they would visit if it reopened.

    “I’ve never actually gone in but I would be open to coming here if it reopens,” said Wendy Kim, 70, who has lived in the neighborhood for 40 years. “Why not? It’s good for everyone.”

    Kim, who splits her time between LA and Seoul, said the parks in Seoul are much better maintained than the ones in LA, and that when she craves nature, she travels out of the city for a hike.

    “But every place is different and here, the homeless issue is out of hand. That’s just the reality,” she said.

    The fence goes up

    Public records obtained by Yossefy and reviewed by The LA Local show that city and LAUSD officials coordinated the park’s handoff around a May 22 encampment removal and cleanup, after which LAUSD took control of the site and moved forward with fencing it off. The emails do not explicitly state that the park was fenced because unhoused people were there, but they show encampment removal was a central part of the transition plan.

    Volunteers with Ktown For All, who do weekly outreach to the unhoused community in the area, said they were used to seeing people at the park every Saturday.

    “It’s just like all of a sudden the fence was there,” said Nicolas Emmons, who has been doing outreach near RFK since around 2021.

    Emmons and others said that while some unhoused residents stayed in the park, the majority of the park was open and available.

    “At its peak, it was only a small percentage of the park that was being used by people to live in,” he said. “Some of the people that lived there even took it upon themselves to clean the area around their setup.”

    Eunice Jeon, another volunteer with the organization, said they had built relationships with people there over several years.

    “We regularly saw people there and had built relationships with people there,” she said. “They respected and treated the park well.”

    Jeon added that despite restricting access, the closure has not visibly improved the space.

    “If anything I would say the park is in worse state ever since the fence has gone up despite nobody being in there,” she said.

    Jeon said many individuals she encountered were navigating complex barriers to housing and services, often caught in bureaucratic loops that made it difficult to access help.

    “A lot of the time they’re limited by transportation. Some services don’t allow certain things. They need an address, but in order to get something mailed, they need their driver’s license, which they don’t have because they don’t have an address,” she said.

    In email chains included in the public records, officials also discussed installing permanent wrought iron fencing at the site. When asked if that remains the plan, LAUSD said the project is still in the “planning phase” and that details, including potential site features, have not been finalized.

    “If the park is fenced off, nobody can access it. It doesn’t provide you any use,” Yossefy said. “There are a number of people that can’t access this park, whether they were sleeping in this park, or they used the park to exercise, if they liked to sit and read — none of those things can happen there anymore because it’s completely closed off.”

    Public records show little evidence of public notice. One email mentions posting notices at the park ahead of the cleanup, but there was no formal announcement made to residents that the park — which had been open to the public for years — would be closed and no longer accessible.

    “I think that a public space is meant to be used by the public, including the unhoused,” Jeon said. “That’s something they need to address instead of locking up the parks. That’s a failure of the city. Kicking them out won’t keep anyone safer if they have fewer and fewer places to go.”

    LA Local reporter Marina Peña contributed to this report.