News from the AI & ML world

DeeperML - #socialengineering

Nicole Kobie@itpro.com //
The FBI has issued a warning regarding a major fraud campaign where cybercriminals are using AI-generated audio deepfakes and text messages to impersonate senior U.S. government officials. This scheme, which has been active since April 2025, targets current and former federal and state officials, along with their contacts, aiming to gain access to their personal accounts. The attackers are employing tactics known as smishing (SMS phishing) and vishing (voice phishing) to establish rapport before attempting to compromise accounts, potentially leading to the theft of sensitive information or funds.

The FBI advises that if individuals receive a message claiming to be from a senior U.S. official, they should not assume it is authentic. The agency suggests verifying the communication through official channels, such as calling back using the official number of the relevant department, rather than the number provided in the suspicious message. Additionally, recipients should be wary of unusual verbal tics or word choices that could indicate a deepfake in operation.

This warning comes amidst a surge in social engineering attacks leveraging AI-based voice cloning. A recent report indicated a 442% increase in the use of AI voice cloning between the first and second halves of 2024. Experts caution that the stolen credentials or information obtained through these schemes could be used to further impersonate officials, spread disinformation, or commit financial fraud, highlighting the increasing sophistication and potential damage of AI-enhanced fraud.

Recommended read:
References :
  • Threats | CyberScoop: FBI warns of fake texts, deepfake calls impersonating senior U.S. officials
  • Talkback Resources: Deepfake voices of senior US officials used in scams: FBI [social]
  • thecyberexpress.com: The FBI has released a public service announcement warning of a growing threat involving text and voice messaging scams in which, since April 2025, malicious actors impersonate senior U.S. government officials to target current or former senior federal and state officials and their contacts.
  • www.itpro.com: The FBI says hackers are using AI voice clones to impersonate US government officials
  • The Register - Software: The FBI has warned that fraudsters are impersonating "senior US officials" using deepfakes as part of a major fraud campaign.
  • www.cybersecuritydive.com: Hackers are increasingly using vishing and smishing for state-backed espionage campaigns and major ransomware attacks.
  • Tech Monitor: FBI warns of AI-generated audio deepfakes targeting US officials
  • cyberinsider.com: Senior U.S. Officials Impersonated in AI-Powered Vishing Campaign
  • BleepingComputer: FBI: US officials targeted in voice deepfake attacks since April
  • securityaffairs.com: US Government officials targeted with texts and AI-generated deepfake voice messages impersonating senior U.S. officials
  • www.techradar.com: The FBI is warning about ongoing smishing and vishing attacks impersonating senior US officials.
  • hackread.com: FBI warns of AI Voice Scams Impersonating US Govt Officials
  • iHLS: The FBI has flagged a concerning wave of cyber activity involving AI-generated content used to impersonate high-ranking U.S. government officials.

Nicole Kobie@itpro.com //
The FBI has issued a warning about a rise in scams targeting U.S. government officials. Cybercriminals are using AI-generated voice clones and text messages to impersonate senior officials. This campaign, which started in April 2025, aims to trick current and former federal and state officials, as well as their contacts, into divulging sensitive information or granting unauthorized access to accounts. These tactics are referred to as "smishing" (malicious SMS messages) and "vishing" (fraudulent voice calls). The FBI advises the public not to assume that a message claiming to be from a senior U.S. official is authentic.

The attackers use AI to create realistic voice deepfakes, making it difficult to distinguish real messages from fake ones. They also draw on publicly available data to make their messages more convincing, exploiting human trust to infiltrate broader networks. The FBI has found that one method attackers use to gain access is sending targeted individuals a malicious link under the guise of moving the conversation to a separate messaging platform. The use of AI-generated audio has increased sharply as generative voice models have proliferated and improved in their ability to produce lifelike speech.

Once an account is compromised, the trusted contact information it contains can be used in follow-on attacks against other government officials, their associates, and their contacts. Stolen contact information acquired through these social engineering schemes could also be used to impersonate contacts and elicit information or funds. The FBI notes that the scammers use software to generate phone numbers that are not attributed to specific phones, making them harder to trace. Individuals should remain vigilant and follow standard security advice: do not trust unsolicited messages, and verify requests through official channels.
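
The FBI's advice reduces to a few mechanical checks that can be applied to any unsolicited message before acting on it. The sketch below encodes those checks in Python as a minimal triage helper; the phrase list, the directory of verified numbers, and the function name are illustrative assumptions, not anything prescribed in the FBI announcement.

```python
import re

# Phrases commonly used to move a target onto a separate messaging platform.
# Illustrative list only; not taken from the FBI announcement.
PLATFORM_SWITCH_HINTS = ("signal", "telegram", "whatsapp", "move this to", "text me on")
URL_PATTERN = re.compile(r"https?://\S+|\b\w+\.\w{2,}/\S*", re.IGNORECASE)

def triage_message(sender: str, text: str, verified_directory: set) -> list:
    """Return the reasons an unsolicited message should be verified out of band."""
    reasons = []
    lowered = text.lower()
    if sender not in verified_directory:
        reasons.append("sender is not in the official directory; call back on a published number")
    if URL_PATTERN.search(text):
        reasons.append("contains a link; links in unsolicited messages should be treated as untrusted")
    if any(hint in lowered for hint in PLATFORM_SWITCH_HINTS):
        reasons.append("asks to continue the conversation on a separate messaging platform")
    return reasons

if __name__ == "__main__":
    directory = {"+12025550100"}  # numbers previously verified through official channels (example)
    message = "Hi, it's the Deputy Secretary. Let's move this to Signal: http://example.com/join"
    for reason in triage_message("+12025550199", message, directory):
        print("-", reason)
```

None of these checks detects a deepfake on its own; they simply force the out-of-band verification step the FBI recommends.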

Recommended read:
References :
  • Threats | CyberScoop: Texts or deepfaked audio messages impersonate high-level government officials and were sent to current or former senior federal or state government officials and their contacts, the bureau says.
  • Talkback Resources: FBI warns of deepfake technology being used in a major fraud campaign targeting government officials, advising recipients to verify authenticity through official channels.
  • www.techradar.com: The FBI is warning about ongoing smishing and vishing attacks impersonating senior US officials.
  • securityaffairs.com: US Government officials targeted with texts and AI-generated deepfake voice messages impersonating senior U.S. officials
  • thecyberexpress.com: FBI warns of AI voice scams impersonating senior U.S. officials
  • www.itpro.com: The FBI says hackers are using AI voice clones to impersonate US government officials
  • BleepingComputer: FBI: US officials targeted in voice deepfake attacks since April
  • The Register - Software: Scammers are deepfaking voices of senior US government officials, warns FBI
  • cyberinsider.com: Senior U.S. Officials Impersonated in AI-Powered Vishing Campaign
  • Tech Monitor: FBI warns of AI-generated audio deepfakes targeting US officials
  • The DefendOps Diaries: The Rising Threat of Voice Deepfake Attacks: Understanding and Mitigating the Risks
  • PCWorld: Fake AI voice scammers are now impersonating government officials
  • hackread.com: FBI Warns of AI Voice Scams Impersonating US Govt Officials
  • iHLS: The FBI has flagged a concerning wave of cyber activity involving AI-generated content used to impersonate high-ranking U.S. government officials.
  • arstechnica.com: FBI warns of ongoing scam that uses deepfake audio to impersonate government officials
  • Popular Science: That weird call or text from a senator is probably an AI scam

cybernewswire@The Last Watchdog //
Arsen, a leading cybersecurity firm specializing in social engineering defense, has announced the release of Conversational Phishing, an AI-powered feature embedded within its phishing simulation platform. This innovative tool introduces dynamic and adaptive phishing conversations, designed to train employees against increasingly sophisticated social engineering threats. The aim is to enhance social engineering resilience by creating realistic and personalized phishing simulations that mimic real-world attacker tactics, which often involve ongoing conversations to build trust and manipulate victims.

Arsen's Conversational Phishing addresses the limitations of traditional simulations, which rely on static email templates. This new feature dynamically generates and adapts phishing conversations in real-time, simulating back-and-forth interactions that mirror how attackers manipulate victims over time. According to CEO Thomas Le Coz, the goal is to evolve training methods to provide realistic conditions for testing and training, especially with the rise of AI-driven phishing attacks. The feature is fully integrated into Arsen’s phishing simulation module and has been accessible to all clients for the past six months.
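
The article does not describe Arsen's internals, but the mechanics of a multi-turn phishing simulation can be illustrated generically: keep a scenario state, feed each trainee reply to a text generator, and append the generated attacker message to the transcript. In the sketch below, generate_attacker_reply is a stand-in for a language-model call, and every name is hypothetical rather than part of Arsen's product.

```python
from dataclasses import dataclass, field

@dataclass
class PhishingScenario:
    pretext: str                        # e.g. "IT support asking to re-validate SSO"
    objective: str                      # what the simulated attacker is trying to obtain
    transcript: list = field(default_factory=list)

def generate_attacker_reply(scenario: PhishingScenario, trainee_reply: str) -> str:
    """Stand-in for a language-model call that writes the next attacker message.

    A real simulator would send the pretext, objective, and transcript to a model;
    this stub returns canned text so the sketch runs on its own.
    """
    if "call you back" in trainee_reply.lower():
        return "No need to call back, I can close the ticket now if you confirm your username."
    return "Thanks! To finish the check, could you open the validation link I just sent over?"

def run_turn(scenario: PhishingScenario, trainee_reply: str) -> str:
    scenario.transcript.append({"role": "trainee", "text": trainee_reply})
    attacker_msg = generate_attacker_reply(scenario, trainee_reply)
    scenario.transcript.append({"role": "attacker", "text": attacker_msg})
    return attacker_msg

if __name__ == "__main__":
    scenario = PhishingScenario(pretext="IT support SSO re-validation",
                                objective="credential disclosure")
    print(run_turn(scenario, "Sure, what do you need from me?"))
    print(run_turn(scenario, "I'd rather call you back on the helpdesk number."))
```

The adaptive element is simply that each generated message is conditioned on the trainee's previous replies, which is what distinguishes this style of training from static email templates.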

Recommended read:
References :
  • gbhackers.com: Arsen Introduces AI-Powered Phishing Tests to Improve Social Engineering Resilience
  • hackread.com: Arsen Introduces AI-Powered Phishing Tests to Improve Social Engineering Resilience
  • The Last Watchdog: News alert: Arsen introduces new AI-based phishing tests to improve social engineering resilience

Mandvi@Cyber Security News //
AI has become a powerful weapon for cybercriminals, enabling them to launch attacks with unprecedented speed and precision. A recent CrowdStrike report highlights the increasing sophistication and frequency of AI-driven cyberattacks. Cybercriminals are leveraging AI to automate attacks so they can be launched with minimal human intervention, leading to an increase in network intrusions and data theft.

AI's ability to analyze large datasets and identify patterns in user behavior allows cybercriminals to develop more effective methods of stealing credentials and committing fraud. For example, AI can predict common password patterns, making traditional authentication methods vulnerable. AI-powered tools can generate highly personalized phishing emails, making them almost indistinguishable from legitimate communications and greatly increasing the profitability of cyberattacks.
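
The password-pattern claim is the kind of statistical modeling that is easy to illustrate defensively: a character-level bigram model trained on a sample of leaked passwords can score how predictable a candidate password is. The corpus, smoothing constant, and function names below are illustrative assumptions; a real audit would use a large breach corpus rather than five strings.

```python
import math
from collections import defaultdict

# Tiny illustrative "breach corpus"; a real audit would use millions of leaked passwords.
LEAKED_SAMPLE = ["password1", "summer2024", "qwerty123", "letmein!", "admin2023"]

def train_bigrams(passwords):
    """Count character-to-character transitions across the corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        for a, b in zip(pw, pw[1:]):
            counts[a][b] += 1
    return counts

def predictability(pw, counts):
    """Average log-probability per transition; values closer to zero mean more pattern-like."""
    score, transitions = 0.0, 0
    for a, b in zip(pw, pw[1:]):
        total = sum(counts[a].values()) or 1
        prob = (counts[a][b] + 1) / (total + 26)   # add-one smoothing; 26 is an arbitrary vocabulary guess
        score += math.log(prob)
        transitions += 1
    return score / max(transitions, 1)

if __name__ == "__main__":
    model = train_bigrams(LEAKED_SAMPLE)
    for candidate in ["summer2025", "xK9#qLw2!v"]:
        print(candidate, round(predictability(candidate, model), 2))
```

Run in reverse, the same idea is what makes pattern-following passwords easy for an attacker's model to guess first, which is why the paragraph above treats them as a weak point for traditional authentication.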

Recommended read:
References :
  • Cyber Security News: AI Emerges as a Potent Tool for Cybercriminals to Accelerate Attacks
  • gbhackers.com: AI Becomes a Powerful Weapon for Cybercriminals to Launch Attacks at High Speed
  • www.cysecurity.news: CrowdStrike Report Reveals a Surge in AI-Driven Threats and Malware-Free Attacks