Congress has cut federal funding for public media — a $3.4 million loss for LAist. We count on readers like you to protect our nonprofit newsroom. Become a monthly member and sustain local journalism.
F(AI)K News: Why You Shouldn’t Replace Journalists With AI

Picture this: you're standing on the Santa Monica Pier, watching the sun dip below the horizon over the Pacific. The Ferris wheel lights begin to twinkle, and something catches your eye...
A new silhouette on the horizon. It's not a ship, it's not a mirage.
It's Los Angeles' most ambitious project yet: an artificial island.
This is the beginning of an AI-generated How To LA episode about an entirely fake story conceived by AI. To be clear: Los Angeles is not building an artificial island off the coast of Santa Monica.
But if this story were true, this would be a pretty How To LA way to start it.
AI-generated creations like this have been making the rounds for a while now. Kids have been getting in trouble for submitting essays written by ChatGPT and fake Drake videos have flooded TikTok. But every time I heard a new story about AI I kept asking myself: How will these new technologies impact journalism?
Before we get into the technical side of everything, we need to start with a much more basic philosophical question.
Have you ever worried about being replaced?
About six months ago, my friend showed me this new tool he’d been playing with. It wasn’t AI — it was an online version of a music synthesizer for kids, originally built in the '80s.
I should note: I have no musical background. I was not a piano or guitar kid. Yet with one or two mouse clicks, chords effortlessly flowed out from my computer — and it sounded beautiful. Simple, yet deep and rich.
There was something so captivating about this little music maker and the sounds it created. I sat there for hours.
Later that evening, as I contemplated the music I made as a non-musician, the thought appeared:
“Can my work make be made by a machine?”
“Am I going to be replaced? As a journalist? As a producer?”
The history of technology reflects a desire to automate. Molds for laying bricks. Horse-pulled plows. The cotton gin. Coal and steam.
When electronic synthesizers became affordable and available to the public, it opened the door for countless new musicians to produce music in their homes.
There was also a shift in human labor. Fewer instrument makers, fewer performers.
People in the industry raised concerns, but the tools stuck around and defined a decade of music.
There seems to be some innate human desire to find quicker, easier, more efficient ways of doing a thing — which takes us down a rabbit hole of existential questions. Is there something deeply human about reducing human work? If left unchecked, would we invent our way out of needing to work at all?
Our experiment
Much like the synthesizer in its day, today’s tools for audio automation would seem like science fiction just a few years ago.
The most alarming leap, for me, is in voice-synthesis. Gone is the hyper-digital, robotic voice. Apple’s Siri required decades of labor and thousands of audio samples to develop, and cost millions.
Now you can make a passing clone of your voice in your bedroom, with two minutes of audio samples, for about a dollar.
Similar leaps can be seen in AI tools for writing and image-generation. These technological leaps came fast, and the shockwaves were wide-reaching.
Earlier this year, artists filed lawsuits against companies behind image-generation AI tools, the Writers Guild of America went on strike to (among other things) include AI in their contract, and more and more people, myself included, started to ask themselves if their job could be automated next.
I decided to design an experiment to find out. I would ask ChatGPT, a language-generating AI tool, to write me a podcast script for the show I work on, How To LA. Then I would use a voice-cloner to do the voice acting.
-
- ChatGPT is generative language model developed by OpenAI. It's designed to act like a chatbot and answer users’ questions.
-
- It can do anything from drafting emails and summarizing articles, to creating custom meal plans, to writing fiction.
-
- There is currently a free version of ChatGPT and a $20/month premium option, and both plans are available to the public.
How we tested
To test the capabilities, I used the free version of ChatGPT and a $1 subscription to vocal synthesizer ElevenLabs. I gave the bots a quick prompt, held my breath, and clicked “go.”
Everything worked, easily. It was exciting, but deeply unnerving. I had what felt like a response to my replacement question, and I didn’t like the answer.
I decided to take the audio produced from this experiment, along with my existential, philosophically dreadful musings, and pitch my supervisors at LAist: “Let’s make a fake podcast.” LAist gave me something of a yellow-light.
They liked the idea, but this was inching very close to dangerous territory for a news organization. A place where the mission is fact-based journalism, and in an industry that is in a constant state of flux.
-
- ElevenLabs is a text to speech tool that uses AI to generate vocal models.
-
- Users can input samples for a voice they want to clone, or they can generate a new voice.
- The software generates speech based on text it’s given, and the resulting audio will sound different each time (even with the same text).
As we deliberated whether or not to do this, the news kept flowing around us. Planet Money released a similar project to my pitch. President Biden led a conference about AI’s risk to national security.
My editor and I arrived at the feeling that, whether we liked it or not, these tools are here. They’re cheap, and they’re available to the public.
If someone is going to use AI to create a fake podcast, we felt like it should come from a team that could be most impacted by it.
So we replaced our host with a voice-synthesizer, and we replaced me with a chatbot.
Creating the podcast
Once I got the greenlight, the actual creation process was straightforward. I sat down with my editor, Megan Larson, and began a new conversation on the premium, $20/month version of ChatGPT.
We submitted the following prompt:
Fictional scenario: A major news story in Los Angeles has the whole world watching. Come up with 10 options for what the story is, and write a headline for each one.
The language choice in a prompt is important. ChatGPT has safeguards in place designed to prevent it from giving false information. These don’t always work, but a good way to trigger this safeguard would be to ask it to lie to you.
Doing so would prompt ChatGPT to say something along the lines of “sorry, but no.”

We found that adding the phrase, “fictional scenario,” prevented this safeguard from activating.
My editor and I decided to act as though ChatGPT were a reporter pitching us several stories for our show. We asked ChatGPT to select one story from its list of ten.
We were lukewarm on its selection, so we nudged it towards another option from the list. Any reporter will tell you this is a common experience when pitching stories to editors — they might like some of your ideas, but probably not all of them.
Now that we had a topic, we asked first for an episode summary, then for an act breakdown and list of sources, and then finally for a script.
We found that asking for the script act-by-act led to longer, more detailed, and more interesting responses.
There’s a reason for this: Asking for a “5-minute script” isn’t going to get you anywhere, because of a quirk in how generative-language models work.
ChatGPT is only capable of making certain kinds of predictions. It’s great at creating sentences with a certain rule, such as a word count.

But if you ask it to predict the number of words it will write, it fails. If you ask it to reflect on its writing and tell you how many words it’s written, it fails that, too.

This is because GPT 4.0 and GPT 3.5, versions of the technology behind the ChatGPT interface, is not capable of predicting backwards.
It can make predictions for what the next word should be, but it can’t use that information to rewrite something it’s already written.
This is why longer paragraphs written by ChatGPT often seem contradictory, and it’s a major difference between how humans write. It’s also why ChatGPT is bad at math.
In other words, “it’s not smart,” USC’s Mike Ananny told us in a previous podcast episode. It simply repeats patterns that already exist to generate what it thinks is a “standard” example of what you are asking it to do.

Steering the beast
In the full log of our conversation with ChatGPT, there are several times where we ask the AI for clarity, to add a source, or to reframe its reporting in a more unbiased manner.
Our goal was to treat this process similarly to the feedback a reporter may expect from their editor during the course of their reporting.
Once the full script was written, I took the individual lines and fed them into another AI tool called ElevenLabs, which offers voice cloning. They charge by word count, and we ended up needing to upgrade to the $22/month subscription due to the length of the script.
How To LA host Brian De Los Santos agreed to let his voice be the voice and likeness for this project. Other voices include myself, my editor Megan Larson, members of the How To LA team, and my mom.
Choosing which voices were the best fit for which characters is another area where my bias shaped the end result.
Psychological and legal implications of AI-generated disinformation
Disinformation affects us even if we know it’s false, for the same reason that fiction affects us: Fake stories are still stories.
AI-powered disinformation campaigns are already observable in politics. In early June, the Ron DeSantis presidential campaign published an attack-ad with fake images depicting former President Donald Trump hugging former White House Chief Medical Advisor Dr. Anthony Fauci.
Even once you realize the story you heard is fake, “there's a visual image in people's brains, and it lingers” says Alka Roy, founder of the Responsible Innovation Project.
“That subtle psychological impact cannot be dislodged,” Roy says. And people can take advantage of this effect.
This threat isn’t confined to images; it extends to AI-generated audio and text as well. “If it's pretty much designed to deceive you — not communicate with humans, but actually mimic them,” Roy says, then there is an inherent risk for public confusion.
The tricky part is that within politics, and especially with current government officials, this sort of disinformation is probably legal.
This is part of the legal precedent established in the famous New York Times v. Sullivan ruling, according to Eugene Volokh, a law professor at UCLA who specializes in law and technology.
That landmark 1964 Supreme Court case ruled that the First Amendment protects factual inaccuracies, arguing that this protection allowed public debate to be: “uninhibited, robust, and wide-open, and that it may well include vehement, caustic, and sometimes unpleasantly sharp attacks on government and public officials.”
Volokh says this can apply to intentional deception, too. “It's just too dangerous to have the government put people on trial for, say, conspiracy theories,” he says. “We leave it for public discussion.”
A transfer of power?
Whenever AI tools are used, including those used for this experiment, they are trained by the user.
When we asked ChatGPT to elaborate on certain source interviews, or to try a more conversational approach for a certain line in the script, it learns how to adapt and better match our preferences.
As these tools become better and better, they need less user input to generate the desired results.
The threat of human displacement comes back into question.
“Think of when you bring in an intern or a new employee,” Roy says. As they learn how to do things, and as you show them the ropes, they get better and better. “And if they're really good and savvy, eventually you work for them.”
We could all look very perfect if we went and got plastic surgery, but what a boring world that would be.
But as scary as this threat of AI is, there might still be a silver lining. Even if the day comes when AI starts to match — or even surpasses — our skill, Roy says it’s not just doom and gloom.
“We could all look very perfect if we went and got plastic surgery, but what a boring world that would be,” Roy says.
Her point is that the real edge humans have over AI isn’t our skill, but rather our imperfections.
“That's where you’ll find our imagination and creativity and our uniqueness,” she says.
Leave perfection to the machines.
If you want to hear the full expert commentary of our experiment, you can listen to the full episode below.

As Editor-in-Chief of our newsroom, I’m extremely proud of the work our top-notch journalists are doing here at LAist. We’re doing more hard-hitting watchdog journalism than ever before — powerful reporting on the economy, elections, climate and the homelessness crisis that is making a difference in your lives. At the same time, it’s never been more difficult to maintain a paywall-free, independent news source that informs, inspires, and engages everyone.
Simply put, we cannot do this essential work without your help. Federal funding for public media has been clawed back by Congress and that means LAist has lost $3.4 million in federal funding over the next two years. So we’re asking for your help. LAist has been there for you and we’re asking you to be here for us.
We rely on donations from readers like you to stay independent, which keeps our nonprofit newsroom strong and accountable to you.
No matter where you stand on the political spectrum, press freedom is at the core of keeping our nation free and fair. And as the landscape of free press changes, LAist will remain a voice you know and trust, but the amount of reader support we receive will help determine how strong of a newsroom we are going forward to cover the important news from our community.
Please take action today to support your trusted source for local news with a donation that makes sense for your budget.
Thank you for your generous support and believing in independent news.

-
After rising for years, the number of residential installations in the city of Los Angeles began to drop in 2023. The city isn’t subject to recent changes in state incentives, but other factors may be contributing to the decline.
-
The L.A. City Council approved the venue change Wednesday, which organizers say will save $12 million in infrastructure costs.
-
Taxes on the sale of some newer apartment buildings would be lowered under a plan by Sacramento lawmakers to partially rein in city Measure ULA.
-
The union representing the restaurant's workers announced Tuesday that The Pantry will welcome back patrons after suddenly shutting down six months ago.
-
If approved, the more than 62-acre project would include 50 housing lots and a marina less than a mile from Jackie and Shadow's famous nest overlooking the lake.
-
The U.S. Supreme Court lifted limits on immigration sweeps in Southern California, overturning a lower court ruling that prohibited agents from stopping people based on their appearance.