An AI Lie Detector for Today’s Deepfake World

“I don’t know if he really, really loves Nicole Kidman or he really, really hates her. But what I do know is that when he’s asked this question something explodes inside him, even though he’s playing a poker face and he’s a movie actor and he knows how to act in front of the camera.”

Despite the actor’s calm exterior, the software picks up super-high levels of stress, emotion and cognition from the involuntary signals he gives off.

Former US President Bill Clinton managed to score low, at first, when he declared on camera in 1998: “I did not have sexual relations with that woman.”

He was being truthful, to some extent. According to a very particular interpretation of “sexual relations,” he’d convinced himself he wasn’t lying about what happened between him and Monica Lewinsky.

Cohen says: “We see from the video that he lied, but he believed his own lie. He almost convinced everyone, but at the end of the video we see an explosion on his cognition and emotion. He’s trying to believe in his lie, but you can’t manipulate your internal biology.”

The Illuminator software can provide useful insights from as little as 20 seconds of reasonable-quality video. 

Deepfake Detector
The startup has also just introduced a new tool that uses the Illuminator software to combat the growing threat of deepfakes in electoral processes, a timely launch given the US elections, among many others taking place in the coming months.

The detector analyzes videos at scale, categorizing them as deepfake, authentic, or suspect for further examination. It can process unlimited volumes of content, from single videos to millions, making it an invaluable asset for election integrity.
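For readers curious what that kind of triage looks like in outline, here is a minimal, purely illustrative Python sketch of a batch workflow that sorts videos into the three buckets described above. It is not Revealense’s actual code; the triage function, the score_video callable and the thresholds are hypothetical placeholders standing in for a real detector model.

from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class TriageResult:
    video_id: str
    score: float   # hypothetical manipulation likelihood, 0.0 to 1.0
    label: str     # "deepfake", "authentic" or "suspect"

def triage(video_ids: Iterable[str],
           score_video: Callable[[str], float],
           deepfake_threshold: float = 0.9,
           authentic_threshold: float = 0.1) -> List[TriageResult]:
    """Sort videos into the three buckets the article describes;
    anything between the thresholds is flagged 'suspect' for human review."""
    results = []
    for vid in video_ids:
        score = score_video(vid)  # placeholder for a real detector model
        if score >= deepfake_threshold:
            label = "deepfake"
        elif score <= authentic_threshold:
            label = "authentic"
        else:
            label = "suspect"
        results.append(TriageResult(vid, score, label))
    return results

# Example with a dummy scorer standing in for a real model:
print(triage(["clip_001", "clip_002"], score_video=lambda vid: 0.5))

The point of the middle “suspect” bucket is that borderline cases are routed to human analysts rather than decided automatically, which is how a system can plausibly process anything from a single clip to millions.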

“With deepfakes increasing by 245% year-on-year in 2024, the potential to impact major events like national elections is significant,” said Dov Donin, the company’s founder and CEO. “Our system is already used by several governments globally to protect democratic processes from disinformation campaigns.”

It is a serious problem: fake videos can easily shape attitudes and influence voting behavior. In January, a robocall impersonating President Biden’s voice advised New Hampshire voters to abstain from voting in the Democratic primary, a tactic aimed at manipulating voter turnout.

Also in the US, deepfake manipulation of political figures has become widespread, with synthetic images and videos being used to sway voter sentiment. For example, a fake image of Trump embracing black supporters was circulated to bolster his popularity.

Internationally, China has been reported to use AI-driven content to influence elections, as seen in Taiwan’s elections early in the year and in India’s pre-election period, where a huge number of AI-generated voice calls imitated public figures.

“Today’s technologies enable the creation of highly realistic fake videos accessible to anyone online, amplifying their impact through social media – the most influential platform in democracies with its unparalleled reach,” said Donin.

“This situation underscores the urgent need for reliable and ethical fake detection technologies. Such tools are crucial for institutions and media to help global citizens distinguish between reality and manipulated content, ensuring a safer and more informed navigation of the modern world.”

The technology is also being used by US insurance companies to speed up claim settlements for people prepared to submit to a truth test.

They provide details of what was stolen from their home on a video link, then declare that they haven’t “topped up” their claim. Illuminator is watching, very closely, for signs of cognitive dissonance – otherwise known as lying or, in this case, “light fraud.”

It’s a smart version of the old polygraph lie detector, which relies on physical indicators — breathing rate, perspiration, blood pressure, and pulse rate. The polygraph, now a century old, can only be used for yes/no questions and has been dismissed by skeptics as junk science.

More recent attempts at a truth test have involved bombarding people with hundreds of very similar questions in rapid succession – based on the logic that they’ll give a true answer if they don’t have time to think of a false one.

There are other companies that measure some aspects of involuntary human behavior, such as voice analysis or eye movements.

“As far as I know, we are the only company that has managed to gather several human factors working in parallel and then analyze it into a dashboard like ours,” Cohen tells ISRAEL21c.

PTSD and Deepfakes 
Revealense was founded in 2021 in Petah Tikva, central Israel, and has raised over $4 million in funding.

“We’re also using our technology for identifying the possibility for PTSD [post-traumatic stress disorder] among soldiers returning from the war,” says Cohen. 

“We are already working with the Israeli military on a technology that can assess in a matter of minutes the chances of a soldier developing post-trauma.”

The future will bring new challenges in terms of deepfakes, he says, such as the convincing video that surfaced a couple of years ago appearing to show actor Morgan Freeman declaring: “I am not Morgan Freeman. And what you see is not real.”

The technology already exists to produce video that looks and sounds exactly like a president, an actor, or even your own family member, with potentially devastating consequences.

Scammers can create an ultra-sophisticated deepfake of your son, for example, in a video call saying: “Hi Dad, I’m short of cash and I’m stuck without fuel. Can you send me some money?”

But the deepfake won’t have the telltale signs of truth that Revealense’s AI is trained to detect and analyze.

“To protect humans from the danger of AI, specifically from deepfakes, we believe we can provide everyone with what we call a ‘mental ID’ that is unique, like your fingerprint to protect them from AI,” Cohen explains.

By the way, if you’re wondering how I fared in my truth test (and therefore whether you can believe what I’ve written), I’m sorry to tell you that the results are confidential.

John Jeffay is a journalist. This article is published courtesy of Israel21c.