Sounding the Alarm: Exposing Audio Deepfakes

By Helen Goh

Published 1 June 2024

Audio deepfakes are becoming ubiquitous – blurring the line between fact and fiction – but UF researchers are working to develop methods to help the public navigate this new technological terrain.

We’ve all heard about audio deepfakes and voice cloning – some of us have even fallen prey to them. You receive a phone call from someone claiming to have been in an accident or arrested, who then hands the phone over to a supposed lawyer, or even to someone impersonating a law enforcement official, seeking immediate payment.

“Deepfake voices are challenging a fundamental way we have come to understand the world and interact with the people in our lives,” said Patrick Traynor, Ph.D., a professor in UF’s Department of Computer & Information Science & Engineering, and the John H. and Mary Lou Dasburg Preeminent Chair in Engineering. “We rely on our senses, and now, deepfakes challenge the ways in which we interact with the world around us.”

Traynor warns that fake audio impersonating political leaders and other famous people has taken misinformation to new levels.

Humans have been mimicking each other’s voices since the dawn of language, and deepfake audio and voice cloning aren’t recent developments. What has changed is that the widespread accessibility of advanced technology has democratized their use: creating convincing deepfakes is no longer limited by skill or expertise. Thanks to advances in hardware and increasingly sophisticated machine-learning algorithms, individuals can now fabricate virtually any content, and a few seconds of a publicly available voice recording – whether from YouTube or a simple voicemail – is all that is needed.
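
To illustrate how low that barrier has become, here is a minimal sketch of zero-shot voice cloning using the open-source Coqui TTS library and its XTTS v2 model. The file names are placeholders, and the exact model name and API may vary across library versions.

```python
# Illustrative sketch: zero-shot voice cloning with the open-source
# Coqui TTS library (XTTS v2). File paths are placeholders; the model
# name and API reflect one published version and may change.
from TTS.api import TTS

# Download and load a multilingual voice-cloning model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of reference audio -- e.g., clipped from a voicemail
# or a YouTube video -- is enough to condition the synthesized voice.
tts.tts_to_file(
    text="Hi, it's me. I'm in trouble and need your help right away.",
    speaker_wav="reference_clip.wav",  # short sample of the target voice
    language="en",
    file_path="cloned_output.wav",
)
```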

This poses a real threat to various facets of people’s lives — from online banking to air traffic control, the election process, and national defense — where the authenticity of the voice is critical. Deepfake audio undermines the very pillars of modern society, casting a shadow over trust and security.

Traynor also warns of the dual threat posed by the widespread circulation of deepfake audio samples, emphasizing that the issue requires not only the development of tools to detect and expose deepfakes but also the creation of mechanisms to authenticate genuine content.
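
One common approach on the detection side – shown here only as a generic illustration, not as the UF team’s method – is to extract acoustic features that synthesis pipelines tend to distort and train a classifier on labeled real and fake clips. The file names and labels below are hypothetical.

```python
# Minimal, generic sketch of feature-based deepfake detection -- not the
# UF researchers' actual method. Assumes labeled example clips exist.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def spectral_features(path):
    """Summarize a clip as mean MFCCs, a common acoustic fingerprint."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical labeled training data: 1 = genuine speech, 0 = synthetic.
clips = ["real_01.wav", "real_02.wav", "fake_01.wav", "fake_02.wav"]
labels = np.array([1, 1, 0, 0])

X = np.stack([spectral_features(p) for p in clips])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Score an unknown recording: probability that it is genuine.
prob_real = clf.predict_proba(spectral_features("unknown.wav")[None, :])[0, 1]
print(f"Estimated probability the clip is genuine: {prob_real:.2f}")
```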

“We face a twofold challenge,” he said. “Not only do we have to unmask deepfakes, but we must also find ways to prove certain things are genuinely real.”
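
Proving that a recording is genuinely real points toward cryptographic provenance: signing audio at the moment of capture so that anyone can later verify it has not been altered. The sketch below illustrates the idea with an Ed25519 signature from Python’s cryptography package; it is a generic example under assumed file names, not a description of any deployed standard.

```python
# Illustrative sketch of cryptographic provenance for audio: a recording
# device signs the file at capture time, and anyone holding the public
# key can later verify the bytes are unmodified. Generic example only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the private key would live in the recorder's secure hardware.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

with open("recording.wav", "rb") as f:  # hypothetical captured audio
    audio_bytes = f.read()

signature = private_key.sign(audio_bytes)  # distributed alongside the file

# Verification: any tampering with the audio invalidates the signature.
try:
    public_key.verify(signature, audio_bytes)
    print("Signature valid: audio matches what the device captured.")
except InvalidSignature:
    print("Signature invalid: audio was altered or is not authentic.")
```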