DECEPTION DETECTION
Beware of AI-based Deception Detection, Warns Scientific Community

Published 4 May 2024

Artificial intelligence may soon help to identify lies and deception. However, a research team from the Universities of Marburg and Würzburg warns against premature use.

Oh, if only it were as easy as with Pinocchio. With him, it was simple to see when he was telling a lie: after all, his nose grew a little longer each time. In reality, it is much more difficult to recognize lies, and it is understandable that scientists have long been trying to develop valid deception detection methods.

Much hope has now been placed in artificial intelligence (AI) to achieve this goal, for example in the attempt to identify travelers with criminal intentions at the EU borders of Hungary, Greece and Lithuania.

A Valuable Tool for Basic Research
Researchers at the Universities of Marburg and Würzburg are now warning against the premature use of AI to detect lies. In their view, the technology is a potentially valuable tool for basic research, helping to gain better insight into the psychological mechanisms that underlie deception. However, they are more than skeptical about its application in real-life contexts.

Kristina Suchotzki and Matthias Gamer are responsible for the study, which has now been published in the journal Trends in Cognitive Sciences. Kristina Suchotzki is a professor at the University of Marburg; her research focuses on lies and how to detect them. Matthias Gamer is a professor at the University of Würzburg. One of his main areas of research is credibility diagnostics.

Three Central Problems for Applied Use
In their publication, Suchotzki and Gamer identify three main problems in current research on AI-based deception detection: a lack of explainability and transparency in the tested algorithms, the risk of biased results, and deficits in the theoretical foundation. The reason, in their view, is clear: “Unfortunately, current approaches have focused primarily on technical aspects at the expense of a solid methodological and theoretical foundation,” they write.

In their article, they explain that many AI algorithms suffer from a “lack of explainability and transparency”: it is often unclear how an algorithm arrives at its result. With some AI applications, even the developers can at some point no longer clearly trace how a judgment was reached. This makes it impossible to critically evaluate the decisions and to discuss the reasons for incorrect classifications.
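
To make the “black box” problem concrete, the following minimal sketch (ours, not the researchers’) trains a small neural network on invented, toy “deception” features. The model outputs a verdict, but no rationale; a post-hoc probe such as permutation importance hints at which inputs the model leans on overall, yet still cannot explain why a particular case was classified as deceptive. All feature names and data here are hypothetical.

# Minimal sketch (not from the study): a black-box verdict with no built-in
# rationale, followed by a limited post-hoc explanation attempt.
# All feature names and data are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Toy stand-in for behavioral features; labels: 0 = truthful, 1 = deceptive.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
feature_names = ["response_latency", "pitch_variation", "gaze_aversion",
                 "word_count", "hesitations", "skin_conductance"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network: its output is a single probability, with no
# human-readable explanation of how it was reached.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                    random_state=0).fit(X_train, y_train)
print("P(deceptive) for first test case:", clf.predict_proba(X_test[:1])[0, 1])

# Post-hoc probe: permutation importance ranks which inputs the model relies
# on globally, but cannot justify any individual classification.
imp = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:>18}: {score:+.3f}")

Even with such probes, the verdict for a single traveler or suspect remains unexplained, which is precisely the deficit the researchers describe.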