
How LAist uses Artificial Intelligence

Our principles, commitments, and current practices

(Updated April 2026)

LAist is here to serve Los Angeles with high-quality news and information people can trust. As artificial intelligence tools become part of how newsrooms work, we want to be straightforward with you about how we use them, where we draw lines and who is accountable for what we publish.

These principles apply across LAist’s digital and audio journalism, our newsletters and social media, the audience-facing tools we build and our internal use of generative AI. This is a living document. As the technology changes and as we learn from our own experiments, we will update it.

Our principles

1. Accountability
LAist journalists are responsible for everything we publish. If an AI tool plays any role in our work, a human at LAist has reviewed it and that person is accountable for it under the same editorial standards as the rest of our journalism.

2. Verification
We treat anything an AI tool produces as unverified. Generative AI tools can make mistakes, fabricate details and reflect bias from the data they were trained on. For that reason, we ground any use of AI in our journalism in our own original reporting, and we confirm facts through that reporting and trusted sources before publication, the same way we always have.

3. Human voice and judgment
Our journalists make LAist what it is through their reporting, writing, editing and editorial judgment. AI does not replace our reporters, our editors, our hosts, or the relationships they build with the communities they cover. We do not use AI to generate whole stories, and we do not use AI to replicate the voice or likeness of any journalist or person.

4. Transparency
When AI plays a meaningful role in something we publish, we tell you. For example, in investigations that use AI to synthesize big data sets, we are transparent about its use and still require the conclusions to be authenticated through human review. When we build products that rely on AI, we will label them in context so you can see what is AI and what isn’t.

5. Privacy and protection of sources, audience and members
LAist staff members do not enter sensitive source material, unpublished reporting, or private information about our members and donors into public AI tools. We use approved, secure platforms for any work involving sensitive information. When our audience interacts with AI-powered tools, we apply the same data protection standards as we do across all of our products, including applicable privacy laws.

6. Service to Los Angeles
Public media exists to provide universal access. Where AI can help us serve more Angelenos — through translation, accessibility features, or making our reporting easier to find and use — we will explore it carefully, with human review and clear disclosure. The standard we hold ourselves to is whether a tool genuinely serves our audience, not whether it saves us time.

In the spirit of this document: an early draft was developed with the help of an AI tool. We used AI to analyze public-facing AI practices at other public media newsrooms whose standards align with our own, including WBEZ, the Texas Tribune and ProPublica, and human editors also reviewed ethics guidelines from organizations including the Associated Press, the Society of Professional Journalists (SPJ) and the American Journalism Project. The final version was written, edited and approved by people at LAist.

What AI won’t do at LAist

Some commitments are fundamental. We will not:

  • Publish whole stories generated by AI.
  • Use AI to replicate the voice of any LAist journalist, host, or person.
  • Mislead our audience by publishing AI-generated images that could be mistaken for original photography or original visual reporting. If an AI-generated image is the subject of the reporting, we clearly label it.
  • Mislead our audience by using AI to create art representing real, identifiable people, places, or events in our news coverage.
  • Publish AI-assisted work without review by an LAist editor.
  • Enter sensitive source material or private audience and donor information into public AI tools.

These commitments reflect our current standards and principles. As AI technology evolves, specific practices may change, but our core commitment to human-driven journalism and transparency will not.

How we are using AI right now

We are taking a deliberate, hands-on approach. Our current uses fall into a few categories:

  • Internal newsroom support. We use approved tools to help with tasks like generating image alt text for accessibility, supporting editorial coaching and assisting with research workflows. Anything produced with AI assistance is reviewed by a journalist before it reaches our audience.
  • Translation. We are piloting AI-assisted translation in a limited way to make some of our reporting available in languages beyond English. When we use AI translation, we disclose it on the content itself so readers know what they are looking at, and we are mindful that translation for the communities we serve carries real responsibility.
  • Audience-facing tools. As we build interactive tools and features that use AI — such as products that help Angelenos navigate civic information — we will label them in context, explain what the AI does and doesn’t do, and ground them in our own reporting wherever possible.
  • Behind the scenes. Our product, marketing, fundraising and operations teams may use AI tools to help with tasks like drafting internal communications, analyzing audience data, enhancing our fundraising communications or building software. The same standards for accuracy, privacy and human review apply across all departments at LAist.

Oversight

LAist has an internal AI working group that reviews new AI tools and use cases before they are adopted, evaluates them for accuracy and bias, and updates our internal guidelines as the technology and our understanding of it evolve. These public principles reflect the values that the group works from.

Our journalism and AI training

LAist’s reporting is the product of significant human effort and is protected by copyright. We reserve the right to decide whether and how third-party AI companies may use our journalism to train their models, and we will evaluate those decisions in line with our mission and our obligation to the journalists who do this work.

Tell us what you think

We know the questions around AI in journalism are not settled, and people in our audience and our newsroom hold different views. If you have feedback on these principles or on how you see AI showing up in our work, we want to hear it. You can reach us here.

This document was last updated April 24, 2026. We expect to revise it regularly.