Artificial intelligence (AI) is now everywhere, and its capabilities impress with their effectiveness. In the hands of malicious actors, however, it can quickly become a powerful weapon. The FBI recently examined a disturbing phenomenon: scammers using AI to create voice and video deepfakes. These convincing fakes are used to manipulate victims and steal sensitive data. The agency is warning about these new threats, which target public figures and key players in the crypto industry.

In short
- The FBI warns that deepfakes are being used to impersonate senior US officials and deceive their networks.
- Synthetic, fraudulent voices facilitate the theft of sensitive data and credentials.
- Crypto leaders, including Polygon's Sandeep Nailwal, have fallen victim to scams run through AI-faked video calls.
- The FBI recommends maximum vigilance: verify your interlocutors and secure accounts with strong authentication.
AI and deepfakes: the new wave of targeted scams, according to the FBI
Since April 2025, the FBI has observed a rise in attacks combining smishing, vishing, and deepfakes. These techniques rely on fraudulent text messages and AI-generated synthetic voices. Scammers impersonate senior US officials to establish a relationship of trust with their victims before attempting to steal sensitive data. The FBI warns:
If you receive a message claiming to come from a senior US official, do not assume that it is authentic.
The stakes go beyond simple data theft. Once accounts are compromised, hackers use the recovered contacts to target new victims. This chain of compromise threatens the integrity of entire government networks. Moreover, scammers often lead victims to platforms they control, where malicious links harvest credentials and passwords.
The FBI recommends heightened vigilance, especially toward unknown links or unusual requests.
Prominent crypto figures in the crosshairs
Deepfakes do not only affect government circles. Several leaders in the crypto industry have revealed that they were targeted by these attacks. Sandeep Nailwal, co-founder of Polygon, recently shared his alarming experience: impostors hacked an employee's Telegram account and organized fake Zoom video calls.
These meetings featured deepfakes of Nailwal and other key team members. The attackers then asked participants to install malware, endangering their devices.
This sophisticated method relies on AI to falsify voices and images, making the scams difficult to detect. Nailwal warns:
Never install anything on your computer during an interaction initiated by another person.
This warning reflects the urgency of adopting prudent behavior in the face of these new forms of fraud. Other personalities, such as Dovey Wan, confirm the disturbing spread of these criminal deepfakes.
Protect your data in the AI era: FBI advice to avoid traps
Faced with these threats, the FBI provides practical recommendations to limit the risks. Here are the key tips to remember:
- Always check the identity of the sender or caller via an independent channel;
- Carefully examine addresses, numbers, URLs, and spelling mistakes, which often betray impersonation;
- Pay attention to visual details: distorted images, unnatural movements, or out-of-sync audio;
- Never click on a link received from an unknown or unverified contact;
- Activate multi-factor authentication on all your sensitive accounts and never share the codes.
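The last tip concerns one-time codes used by multi-factor authentication, which are worthless to an attacker only as long as they stay secret. As an illustration, here is a minimal sketch of the standard time-based one-time password (TOTP) algorithm from RFC 6238, using only Python's standard library; the function name and the demo secret are illustrative choices, not part of the FBI guidance.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    # The moving factor is the number of 30-second steps since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    # HMAC-SHA1 over the big-endian 64-bit counter, per RFC 4226/6238.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation: low nibble picks the window
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59 s
print(totp(b"12345678901234567890", for_time=59))  # → 287082
```

Because each code is derived from a shared secret and the current time, anyone who obtains the code within its 30-second window can log in as you, which is exactly why the FBI insists the codes must never be shared.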
These measures may seem basic, but they constitute a first line of defense against criminals' exploitation of AI. In addition, it is advisable to reserve a dedicated device solely for managing crypto wallets. This precaution limits the risks linked to the accidental installation of malware.
The FBI reminds us that AI technology, however powerful, demands constant vigilance. Deepfakes, skillfully blending images and voices, are gaining in realism and complexity. It is therefore essential to educate professionals and the general public about these dangers.
Deepfakes have already claimed famous victims, Elon Musk among them, illustrating the global scale of the phenomenon. AI, a double-edged sword, is upending the rules of digital trust. In this context, innovative solutions are emerging. Among them is "Verify", a tool developed jointly by Fox and Polygon. This blockchain-based system aims to authenticate content and communications, offering an effective bulwark against AI manipulation. It stands out as a promising avenue for preserving integrity in a rapidly changing digital world.
