
The Federal Bureau of Investigation (FBI) has issued a cautionary notice regarding the use of AI-generated audio deepfakes in phishing attacks targeting US officials. The warning, delivered as part of a public service announcement, outlines the growing threat of voice deepfakes and provides strategies to help individuals identify and defend against such attacks.
“Since April 2025, malicious actors have impersonated senior US officials to target individuals, many of whom are current or former senior US federal or state government officials and their contacts,” the statement said. “If you receive a message claiming to be from a senior US official, do not assume it is authentic.”
The agency further said that the cybercriminals send text messages and AI-generated voice messages (techniques known as smishing and vishing, respectively) to establish trust with targets. The perpetrators then send a malicious link, presented as a way to move the conversation to a different messaging platform, to gain access to victims' personal and official accounts. Once access is obtained, it can be leveraged to target other government officials, their associates, or other contacts. Information gathered through these social engineering tactics can also be exploited to impersonate contacts and extract information or funds.
The FBI did not reveal how many individuals were affected or what motivated the perpetrators. The advisory follows a December 2024 alert from the FBI about criminals using AI to create text, images, audio, and video for crimes such as fraud and extortion. In April 2024, the US Department of Health and Human Services similarly warned of AI voice cloning being used to deceive IT help desks.
Guidelines for protection against potential fraud
To mitigate the risks associated with suspicious messages and potential scams, the FBI recommended several protective measures. These include verifying the identity of the sender before responding to any communication.
The agency also advises listening carefully to the tone and language used in phone calls or voice messages. Because AI-generated voices can closely mimic those of known contacts, distinguishing authentic calls from fakes can be difficult. In cases of doubt, individuals are encouraged to contact security officials or the FBI for verification.
To safeguard sensitive information, the FBI also advises against sharing personal details with individuals met online or over the phone, and recommends verifying any new contact information through previously confirmed sources before responding. It urges caution with links in emails or text messages, advising individuals to confirm the sender's identity before clicking, and warns against downloading attachments or applications from unverified sources.
Earlier this year, a Google Threat Intelligence Group (GTIG) report highlighted the growing use of AI by cybercriminals and state-affiliated actors for fraud, hacking, and propaganda. The report indicated that these actors are employing AI to automate phishing scams, spread misinformation, and manipulate AI models into evading security safeguards.