Generative Artificial Intelligence (GAI) and the Israel-Hamas War

Some channels and message boards are dedicated solely to crowdsourcing memes, encouraging “memetic warfare” and advising users to create “propaganda for fun.” Posts provide instructions on how to use GAI tools to produce radicalizing images based on text prompts. Many of these images reference the Israel-Hamas war — mostly glorifying Hamas, demonizing Israel and spreading already-debunked false narratives. 

Several memes depict the moment Hamas paragliders crossed into Israel on October 7. One Telegram post includes the caption, “Getting the view before tearing up the Torah.”

Another GAI image on 4chan and Telegram shows paragliders over a burning building. When viewed from a distance, the meme reveals an image of Adolf Hitler.

An image shared on an X account dedicated to GAI memes shows a visibly Jewish man standing next to a cartoon bomb in a hospital. This is one of many recent GAI memes depicting the explosion at Al-Ahli hospital in Gaza on October 17, which many initially blamed on Israeli airstrikes. That claim has been widely debunked by multiple analyses from independent experts, the media, the U.S. government and others, which point to the explosion having been caused by a rocket misfire from within Gaza.

A meme shared in an AI “art” Telegram channel shows an approaching military tank with a large nose, along with the caption, “BREAKING: Israeli tanks have been spotted moving to Gaza.” The image clearly draws on a longstanding antisemitic trope depicting Jewish people with disproportionately large noses.

GAI Encourages Doubt of Actual Documented Violence 
Perhaps the most worrying side effect of GAI is that its increasing popularity on social media has sown doubt about real images of graphic or traumatic content. This phenomenon, often referred to as the “liar’s dividend,” has directly shaped the discourse surrounding the war in Israel and Palestine, calling the authenticity of reported violence into question.

On X, 4chan and Telegram, many claimed that an image of a burnt body believed to be an Israeli victim was run through “AI-detector tools” and deemed fake. According to experts, including our own analysts, such tools are unreliable and routinely produce inconsistent results.

Trolls on 4chan promoted the theory that the original photo showed a puppy from an animal rescue wrapped in a towel, and that propagandists used AI tools to edit the puppy out and put a burnt body in its place. However, the puppy photo in question has not been traced to any legitimate source online beyond the 4chan edit. ADL analysts have independently verified images of other similarly burned victims in Israel. 

Photos from Israeli homes showing the bloody aftermath of Hamas’s October 7 attack have also been deemed fake by anti-Israel users online. One post on X alleges that the blood in one such photo was “staged” and that a knife visible in the image looks like it was generated by AI.

On X, a spokesperson for the Israeli government shared photos believed to show human remains at Kibbutz Be’eri, including teeth. Users on both X and Reddit declared without evidence that the images were “fake AI,” alleging that the photos were a product of “Israeli propaganda” meant to fool the masses.

The article is published courtesy of the Anti-Defamation League (ADL).