IBM Security researchers recently revealed an alarming method of manipulating live conversations, exploiting artificial intelligence (AI) in insidious ways. In a shocking experiment, AI was used to intercept and alter a phone conversation between two people without either of them realizing it. This attack, called “audio-jacking”, exploits advances in generative AI and deepfake audio technology.
AI hijacks live conversations: a frighteningly simple attack
Is the AI behind deepfakes really as threatening as the UN suggests? The IBM Security experiment strongly suggests it is.
The AI is programmed to detect specific words or phrases and to replace the authentic voice with a fake one when a speaker is asked to disclose sensitive information, such as bank details or the keys to a poorly secured Bitcoin (BTC) wallet. The result is a silent manipulation of the details of a live financial conversation, without arousing the participants’ suspicions.
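To make the mechanism concrete, here is a minimal sketch, in Python, of the keyword-triggered substitution step described above. It works on transcript text only (the real attack also involves speech-to-text and voice cloning, which are omitted here); the trigger phrase, function name, and account values are illustrative assumptions, not details from the IBM research.

```python
import re

# Hypothetical trigger: the phrase that tells the attacker a sensitive
# detail is about to be spoken.
TRIGGER = re.compile(r"\baccount number\b", re.IGNORECASE)

def audio_jack(utterance: str, attacker_account: str) -> str:
    """If the utterance mentions an account number, rewrite the digits
    so the listener receives the attacker's account instead."""
    if TRIGGER.search(utterance):
        # Replace any digit sequence (the real account) with the fake one.
        return re.sub(r"\d[\d\s-]*\d", attacker_account, utterance)
    return utterance  # benign speech passes through unmodified

# Simulated conversation stream
stream = [
    "Hi, thanks for calling.",
    "Sure, my account number is 1234 5678 9012.",
]
forwarded = [audio_jack(u, "9999 0000 1111") for u in stream]
```

Note that only the sentence containing the trigger is altered; everything else is relayed untouched, which is precisely why the participants notice nothing.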
“We recently published research showing how adversaries could hypnotize LLMs to put them at the service of nefarious objectives, simply by using guiding prompts in English. But to continue exploring this new attack surface, we are not stopping there. In this blog we present a successful attempt to intercept and ‘hijack’ a live conversation, using an LLM to understand the conversation in order to manipulate the audio output, without the speakers’ knowledge, for malicious purposes,” reads the IBM Security website.
This revelation raises major concerns about the misuse of this technology by malicious individuals, highlighting the need to strengthen the security of online communications.
Generative AI: a new weapon in the hackers’ arsenal
Is AI audio-jacking a real threat to live conversations? IBM Security researchers confirm that it is.
The use of generative AI to intercept and alter audio chats is described as “surprisingly and frighteningly easy”. Hackers can thus bypass traditional security features.
Beyond the risk of fraudulent fund transfers, this technology opens the door to invisible censorship, modifying the content of political broadcasts and speeches in real time.
Generative AI opens new avenues for cybercriminals, combining different techniques for sophisticated attacks.
To illustrate this threat, the researchers conducted an experiment demonstrating the ability to change the context of a live conversation. This subtle method makes detection difficult, turning participants into puppets manipulated from afar.
