Some observers on social media quickly dismissed it as an “AI-generated fake” — created using artificial intelligence tools that can produce photorealistic images with a few clicks.
Several AI specialists have since concluded that the technology was probably not involved. By then, however, the doubts about its veracity were already widespread.
Since Hamas’ terror attack Oct. 7, disinformation watchdogs have feared that fakes created by AI tools, including the realistic renderings known as deepfakes, would confuse the public and bolster propaganda efforts.
So far, they have been correct in their prediction that the technology would loom large over the war — but not exactly for the reason they thought.
Disinformation researchers have found relatively few AI fakes, and even fewer that are convincing. Yet the mere possibility that AI content could be circulating is leading people to dismiss genuine images, video and audio as inauthentic.
On forums and social media platforms like X (formerly known as Twitter), Truth Social, Telegram and Reddit, people have accused political figures, media outlets and other users of brazenly trying to manipulate public opinion by creating AI content, even when the content is almost certainly genuine.

“Even by the fog of war standards that we are used to, this conflict is particularly messy,” said Hany Farid, a computer science professor at the University of California, Berkeley, and an expert in digital forensics, AI and misinformation. “The specter of deepfakes is much, much more significant now. It doesn’t take tens of thousands; it just takes a few, and then you poison the well, and everything becomes suspect.”
AI has improved greatly over the past year, allowing nearly anyone to create a persuasive fake by entering text into popular AI generators that produce images, video or audio — or by using more sophisticated tools. When a deepfake video of President Volodymyr Zelenskyy of Ukraine was released in the spring of 2022, it was widely derided as too crude to be real; a similar faked video of President Vladimir Putin of Russia was convincing enough for several Russian radio and television networks to air it this June.
“What happens when literally everything you see that’s digital could be synthetic?” Bill Marcellino, a senior behavioral scientist and AI expert at the Rand Corp. research group, said in a news conference last week. “That certainly sounds like a watershed change in how we trust or don’t trust information.”
Amid highly emotional discussions about Gaza, many happening on social media platforms that have struggled to shield users against graphic and inaccurate content, trust continues to fray. And now experts say that malicious agents are taking advantage of AI’s availability to dismiss authentic content as fake — a concept known as the liar’s dividend.
That misdirection has been bolstered partly by the presence of some content during the war that actually was created artificially.
A post on X with 1.8 million views claimed to show soccer fans in a stadium in Madrid holding an enormous Palestinian flag; users noted that the distorted bodies in the image were a telltale sign of AI generation. A Hamas-linked account on X shared an image that was meant to show a tent encampment for displaced Israelis but pictured a flag with two blue stars instead of the single star featured on the actual Israeli flag. The post was later removed. Users on Truth Social and a Hamas-linked Telegram channel shared pictures of Prime Minister Benjamin Netanyahu of Israel synthetically rendered to appear covered in blood.
Far more attention was paid to suspected footage that bore no signs of AI tampering, such as video of the director of a bombed hospital in Gaza giving a news conference — called “AI-generated” by some — which was filmed from different vantage points by multiple sources.
Other examples have been harder to categorize. The Israeli military released a recording of what it described as a wiretapped conversation between two Hamas members, but some listeners said the audio was spoofed. (The New York Times, the BBC and CNN reported that they have yet to verify the conversation.)
In an attempt to discern truth from AI, some social media users turned to detection tools, which claim to spot digital manipulation but have proved far from reliable. A test by the Times found that image detectors had a spotty track record, sometimes misdiagnosing obvious AI creations or labeling real photos as inauthentic.
In the first few days of the war, Netanyahu shared a series of images on X, claiming they were “horrifying photos of babies murdered and burned” by Hamas. When conservative commentator Ben Shapiro amplified one of the images on X, he was repeatedly accused of spreading AI-generated content.
One post, which garnered more than 21 million views before it was taken down, claimed to have proof that the image of the baby was fake: a screenshot of AI or Not, a detection tool, identifying the image as “generated by AI.” The company later corrected that finding on X, saying that its result was “inconclusive” because the image was compressed and altered to obscure identifying details; the company also said it refined its detector.
“We realized every technology that’s been built has, at one point, been used for evil,” said Anatoly Kvitnitsky, the CEO of AI or Not, which is based in the San Francisco Bay Area and has six employees. “We came to the conclusion that we are trying to do good; we’re going to keep the service active and do our best to make sure that we are purveyors of the truth. But we did think about that — are we causing more confusion, more chaos?”
AI or Not is working to show users which parts of an image are suspected of being AI-generated, Kvitnitsky said.
Available AI detection services could potentially be helpful as part of a larger suite of tools but are dangerous when treated like the final word on content authenticity, said Henry Ajder, an expert on manipulated and synthetic media.
Deepfake detection tools, he said, “provide a false solution to a much more complex and difficult-to-solve problem.”
Rather than relying on detection services, initiatives like the Coalition for Content Provenance and Authenticity and companies like Google are exploring tactics that would identify the source and history of media files. The solutions are far from perfect — two groups of researchers recently found that existing watermarking technology is easy to remove or evade — but proponents say they could help restore some confidence in the quality of content.
“Proving what’s fake is going to be a pointless endeavor, and we’re just going to boil the ocean trying to do it,” said Chester Wisniewski, an executive at the cybersecurity firm Sophos. “It’s never going to work, and we need to just double down on how we can start validating what’s real.”
For now, social media users looking to deceive the public are relying far less on photorealistic AI images than on old footage from previous conflicts or disasters, which they falsely portray as the current situation in Gaza, according to Alex Mahadevan, the director of the Poynter media literacy program MediaWise.
“People will believe anything that confirms their beliefs or makes them emotional,” he said. “It doesn’t matter how good it is, or how novel it looks, or anything like that.”