Social Media Companies Get 'Big Fat F' in Moderating Israel-Hamas War Content, Say Hate-Speech Watchers

A growing group of academics and civil discourse advocates is sounding the alarm over a surge in hate speech and disinformation across all major social media platforms as the Israel-Hamas war escalates.
Consider the most recent dramatic example: the Oct. 17 explosion at a hospital in Gaza that killed scores of civilians. As journalists and respected investigative groups tried to make sense of the incident, social media exploded with unfounded accusations from Hamas and its supporters that Israel had struck the hospital with a missile, killing close to 500 people. Those same accounts then cast doubt on subsequent evidence suggesting that the hospital was most likely hit by an errant rocket fired by Palestinian militants and that the death toll — while still strikingly high — was significantly lower than initially reported.
While incontrovertible confirmation of who perpetrated this particular tragedy may not come for some time — if ever — it’s clear that the chaotic online discourse around it further inflamed tensions.
Eroding trust
“It’s not just that there are fraudulent pieces of information out there. When the authentic pieces of information come out, we don’t know if we should trust it,” said Hany Farid, a UC Berkeley School of Information professor specializing in detecting manipulated media and deep fakes. “And that makes reasoning about what is happening really difficult. Nobody fundamentally knows what’s going on anymore, and that’s insane.”
Death toll and casualties
- Israeli officials report an attack by Hamas militants on Oct. 7 killed about 1,200 people. In addition, they say about 250 people were taken hostage; some have since been released.
- Gaza health officials have reported more than 25,000 Palestinians have been killed in Israeli airstrikes.
— NPR (Jan. 24)
Over the last year, major social media platforms have gutted their content moderation teams, a shift that many say is in part responsible for the proliferation of photos and videos of this war that turn out to be recycled from other conflicts — or are sometimes even clipped from video games.
“Let’s start with Twitter. (I refuse to call it X.) They just get a big fat F,” Farid said. “It is clear that Twitter has become more of a hellhole than it was pre-Musk, and it continues to decline.”
Since Elon Musk bought Twitter last year — and then changed its name to “X” — many observers say the social media platform, long influential among journalists, has increasingly become a de facto rebroadcaster of unfiltered war propaganda posted on even more loosely moderated, conspiracy-prone platforms like Telegram.
But if X gets an “F” from hate-speech watchers during this latest conflict, Meta, which owns Facebook, Instagram and WhatsApp and has considerably greater reach, gets something just north of an F, said Callum Hood, head of research for the Center for Countering Digital Hate.
“If I know that one of the most popular posts on Facebook — according to data that I know they have access to, as well — is footage of an execution, with no warnings on it, at all, I have very serious concerns about what they’re doing,” he said.
In a statement to KQED, a Meta spokesperson pointed to a company blog post about its special operations center staffed with experts, including fluent Hebrew and Arabic speakers, “working around the clock to monitor our platforms while protecting people’s ability to use our apps to shed light on important developments happening on the ground.”
‘These are not new problems’
Content moderation is no easy task, especially when individuals with strong opinions post or repost factually inaccurate material, said Jillian York, director for international freedom of expression with the San Francisco-based Electronic Frontier Foundation. Last week, her group posted an open letter calling on social media companies to better handle misinformation, particularly during major international conflicts.
“These are not new problems,” York said. “We want platforms to ensure that their content moderation practices are transparent and consistent. We want them to sufficiently resource in every location in which they operate.”
Every researcher KQED spoke to also lamented the lack of federal regulation of social media platforms. They noted how, in contrast, the European Union’s Digital Services Act went into effect a couple of months ago, requiring large platforms to employ robust procedures to tackle systemic risks and abuse.
In a blog post, Meta acknowledged growing concerns among users that Facebook and Instagram appeared to be algorithmically curtailing the reach of certain posts, a technique known as “shadow banning.” The company characterized those incidents as “bugs,” which it says have since been fixed.
“This bug affected accounts equally around the globe – not only people trying to post about what’s happening in Israel and Gaza – and it had nothing to do with the subject matter of the content,” Meta said in its blog post.
But researchers say their ability to monitor what’s actually gaining traction on Meta’s platforms through the company’s application programming interfaces, or APIs, has been limited. CrowdTangle is another analytics tool researchers have found useful in monitoring content — one they say Meta bought but has failed to maintain.
“Facebook and Instagram is harder to study than ever. The truth is, I don’t think any organization has a very good grip on how disinformational hate is spreading on Facebook or Instagram right now because every possible tool that we once had for investigating it, they’re unusable,” Hood said. “Overall, maybe there’s less on these platforms, but we can’t actually say.”
According to Hood and other researchers, a similar lack of transparency makes it impossible to independently assess the efforts of TikTok, which recently announced it had launched a command center that brings together “key members” of its “40,000-strong global team of safety professionals” and was working to remove posts that support or incite violence.
Hood and Farid, among many other observers, say these recent efforts are largely ineffective because they are overlaid on top of an ad-based business model designed to keep users on the platforms by promoting engaging content, regardless of its veracity.
‘Stop getting your information from social media’
“People should be angry that when they go online, they are being lied to. They are being manipulated by other people, by state-sponsored actors, and by the very platforms, and we are no longer informed citizens,” Farid said. “We’re not arguing about how to do something or if to do something. We’re arguing about 1 + 1 = 2.”
In contrast, Farid adds, most news organizations have structural incentives to try to get the facts right, even though a large proportion of Americans don’t trust them either. That is to say, journalists are invested in maintaining their own credibility with news consumers, and rival outlets competitively scrutinize one another’s coverage, probing it for weaknesses.
“When things are unfolding as fast as they are, stop getting your information from social media,” he said. “I’m not saying that The Washington Post and The New York Times and the San Francisco Chronicle always get it right. But at least they’re trying to get it right. And you can’t say that about social media.”
Farid says he finds hope for the future in emerging content authentication protocols and technologies. He points to new efforts like the Coalition for Content Provenance and Authenticity (C2PA), an alliance between Adobe, Intel, Microsoft and other major tech companies to develop technical standards for certifying the provenance of media content.
“So if I am in Gaza, and I film the bombing of a hospital, I can now verify when that was taken, who took it, where it was taken, and what was recorded,” Farid said. “That technology, we know how to do it. It just has to get deployed.”
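For readers curious what that looks like in practice, the core pattern behind provenance standards like C2PA is a cryptographically signed manifest that binds a media file’s fingerprint to its capture metadata. The short Python sketch below is illustrative only: the manifest fields, the manifest_for_capture helper and the HMAC key are assumptions made for this example, and the actual C2PA specification relies on certificate-based signatures and a richer manifest format embedded in the file itself.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical signing key; in a real provenance system this would be a
# device- or publisher-specific private key backed by a certificate.
SIGNING_KEY = b"example-capture-device-key"

def manifest_for_capture(media_bytes: bytes, creator: str, location: str) -> dict:
    """Build a provenance manifest binding the media's hash to capture metadata."""
    return {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "location": location,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

def sign(manifest: dict) -> str:
    """Sign the canonicalized manifest (HMAC stands in for real certificate-based signatures)."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, manifest: dict, signature: str) -> bool:
    """Check that the manifest is untampered and still matches the media bytes."""
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(expected, signature)
    matches_media = manifest["media_sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return untampered and matches_media

# Example: a clip signed at capture time can later be checked by a newsroom.
clip = b"...raw video bytes..."
manifest = manifest_for_capture(clip, creator="field photographer", location="Gaza City")
signature = sign(manifest)
print(verify(clip, manifest, signature))            # True: metadata and media intact
print(verify(clip + b"edit", manifest, signature))  # False: media was altered
```

Any edit to either the pixels or the metadata breaks verification, which is the property that would let a viewer check the “when, who and where” claims Farid describes.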
The history of this region is both complicated and fraught. Here is some context about what led up to the most recent attacks and counterattacks.
NPR's Aya Batrawy and Daniel Estrin called the initial attack "one of the most dramatic escalations in violence in recent memory," adding there are "concerns the chaos could spread to the occupied West Bank and different countries in the Middle East."
- This round of bloodshed began with a surprise attack by Palestinian fighters from Gaza into Israel during the Jewish holiday of Simchat Torah. On Oct. 7, militants infiltrated Israel's border using paragliders, motorbikes and boats and fired thousands of rockets toward the country from Gaza.
NPR's Fatima Al-Kassab reported on the history of the Gaza Strip. Some key excerpts:
- The Gaza Strip is a 25-mile-long by 6-mile-wide enclave, bounded by the Mediterranean Sea to the west, Israel to the north and east, and Egypt to the south.
- Gaza is one of two Palestinian territories. The other is the Israeli-occupied West Bank.
- The strip has been under a blockade by Israel and Egypt, restricting the movement of people and goods, since Hamas seized control of the territory in 2007. Israel controls its airspace and shoreline, as well as what goods can cross Gaza's borders.
NPR's Fatma Tanis examined how we got here and what might come next in this longstanding conflict.