Fact Check: AI Fakes in Israel's War Against Hamas

“There are some examples, but it’s not much if we compare it to the amount of disinformation that is actually old images and old videos that are now reshared in a misleading way,” he adds.

However, this does not mean the technology isn’t a factor. Farid explains that the sheer number of AI fakes is not what matters most.

“You can have two images that go super viral, and hundreds of millions of people see it. It can have a huge impact,” he says.

“So it doesn’t have to be a volume game, but I think the real issue we are seeing is just the pollution of the information ecosystem.”

3) What narratives do the AI fakes serve in the Israel-Hamas war?
The AI images circulating on social media usually trigger strong emotions.

Canetta identifies two main categories. The first consists of images that focus on the suffering of the civilian population and arouse sympathy for the people shown. The second consists of AI fakes that exaggerate support for Israel, Hamas or the Palestinians and appeal to patriotic feelings.

The first category includes, for example, the picture above of a father with his five children in front of a pile of rubble. It was shared many times on X (formerly Twitter) and Instagram and seen hundreds of thousands of times in connection with Israel’s bombardment of the Gaza Strip. 

In the meantime, the image has been marked with a Community Note, at least on X, identifying it as fake. It can be recognized as such by various errors and inconsistencies that are typical of AI-generated images.

The man’s right shoulder, for instance, is disproportionately high. The two limbs emerging from underneath it also look strange, as if they were growing out of his sweater.

Also striking is how the hands of the two boys wrapped around their father’s neck merge into one another. And several of the hands and feet in the picture have too many or too few fingers and toes.

Similar anomalies can also be seen in the following AI fake that went viral on X, which purportedly shows a Palestinian family eating together in the rubble, evoking sympathy for Palestinian civilians.

The picture below, which shows soldiers waving Israeli flags as they walk through a settlement full of bombed-out houses, falls into the second category, which is designed to spark feelings of patriotism.

The accounts that share the image on Instagram and X appear to be primarily pro-Israeli and welcome the events depicted. DW also found the picture as an article image in a Bulgarian online newspaper, which did not recognize or label it as AI-generated.

What looks fake here is the way the Israeli flags are waving. The street in the middle also appears too clean, while the rubble looks very uniform. The destroyed buildings also look like twins, standing at fairly regular intervals.

All in all, the visual impression is too “clean” to appear realistic. This kind of flawlessness, which makes images look as if they have been painted, is also typical of AI.

4) Where do such AI images come from?
Most of the images created with the help of artificial intelligence are distributed by private accounts on social media, posted by both authentic and obviously fake profiles.

However, AI-generated images can also be used in journalistic products. Whether and in which cases this can be useful or sensible is currently being discussed at many media companies.

The software company Adobe caused a stir when it added AI-generated images to its range of stock photos at the end of 2022. These are labeled accordingly in the database.

Adobe now also offers AI images of the Middle East war for sale — for example of explosions, people protesting or clouds of smoke behind the Al-Aqsa Mosque. 

Critics find this highly questionable. Some online sites and media outlets have continued to use the images without labeling them as AI-generated. The image above, for example, appears on the site “Newsbreak” without any indication that it was computer-generated. DW found this out with the help of a reverse image search.

Even the European Parliamentary Research Service, the European Parliament’s in-house research service, illustrated an online text on the Middle East conflict with one of the fakes from the Adobe database, again without labeling it as AI-generated.

Canetta from the European Digital Media Observatory is appealing to journalists and media professionals to be very careful when using AI images, advising against their use, especially when it comes to real events such as the war in Gaza.

The situation is different when the goal is to illustrate abstract topics such as future technologies.

5) How much damage do AI images cause?
The knowledge that AI-generated content is circulating makes users uncertain about everything they encounter online.

UC Berkeley researcher Farid explains: “If we enter this world where it is possible to manipulate images, audio and video, everything is in question. So you’re seeing real things being claimed as fake.”

That is precisely what happened in the following case: an image allegedly showing the charred corpse of an Israeli baby was shared on X by Israel’s Prime Minister Benjamin Netanyahu and the conservative US commentator Ben Shapiro, among others. 

The controversial anti-Israeli influencer Jackson Hinkle then claimed that the image had been created using artificial intelligence. 

As alleged proof, Hinkle attached to his post a screenshot from the AI detector “AI or not,” which classified the image as AI-generated.

Hinkle’s claim on X was viewed more than 20 million times and led to heated discussions on the platform. 

In the end, many users concluded that the image was in all likelihood genuine and that Hinkle was therefore wrong. Farid also told DW that he could not find any discrepancies in the picture that would indicate an AI fake.

How can that be, you might ask? AI detectors, which can be used to check whether an image or a text might be AI-generated, are still very error-prone. Their verdicts vary from one image to the next, and they typically express results only as probabilities, not with 100% certainty.

They are therefore at most suitable as an additional tool for checking suspected AI fakes, but definitely not as the only one.
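To make that point concrete, here is a minimal Python sketch of how such probabilistic verdicts might be combined instead of trusted individually. The detector functions and their scores are hypothetical stand-ins, not the real APIs of “AI or not” or Hive moderation; the sketch only illustrates the principle that a single probability score is a signal, not proof.

```python
from statistics import mean

# Hypothetical stand-ins for AI-image detectors. These are NOT the real
# APIs of "AI or not" or Hive moderation; each placeholder simply models
# the behavior described above: it returns an estimated probability
# (0.0 to 1.0) that an image is AI-generated.
def detector_a(image_path: str) -> float:
    return 0.91  # placeholder score; a real detector would analyze the pixels

def detector_b(image_path: str) -> float:
    return 0.34  # a second tool disagreeing, as in the case described here

def assess(image_path: str, detectors, threshold: float = 0.8) -> str:
    """Combine several probabilistic verdicts instead of trusting one tool."""
    scores = [d(image_path) for d in detectors]
    if all(s >= threshold for s in scores):
        return f"likely AI-generated (mean score {mean(scores):.2f})"
    if all(s <= 1 - threshold for s in scores):
        return f"likely authentic (mean score {mean(scores):.2f})"
    # Detectors disagree or hover near the middle: treat the result as
    # inconclusive and fall back on other methods, such as reverse image
    # search, source checks and looking for visual inconsistencies.
    return "inconclusive: verify with additional methods"

print(assess("suspect_image.jpg", [detector_a, detector_b]))
# prints: inconclusive: verify with additional methods
```

Requiring agreement between tools before declaring an image “fake” or “real” mirrors what happened next, when the disputed image was run through more than one detector.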

DW’s fact-checking team also could not detect any clear signs of AI-based manipulation in the image, which purportedly shows a baby’s corpse.

Curiously, when DW tried out “AI or not” itself, the detector did not classify the image as AI-generated and noted that the image quality was poor.

Another AI detector (Hive moderation) also concluded that the image was genuine.

Ines Eisele is a member of DW’s fact-checking team, author for text and video, and Channel Manager for DW’s German website.

Uta Steinwehr is a founding member of DW’s fact-checking team.

Mina Kirkowa contributed to this report.

This article is published courtesy of Deutsche Welle (DW).