Nicole Kobie@itpro.com
//
The FBI has issued a warning regarding a major fraud campaign where cybercriminals are using AI-generated audio deepfakes and text messages to impersonate senior U.S. government officials. This scheme, which has been active since April 2025, targets current and former federal and state officials, along with their contacts, aiming to gain access to their personal accounts. The attackers are employing tactics known as smishing (SMS phishing) and vishing (voice phishing) to establish rapport before attempting to compromise accounts, potentially leading to the theft of sensitive information or funds.
The FBI advises that if individuals receive a message claiming to be from a senior U.S. official, they should not assume it is authentic. The agency suggests verifying the communication through official channels, such as calling back using the official number of the relevant department, rather than the number provided in the suspicious message. Additionally, recipients should be wary of unusual verbal tics or word choices that could indicate a deepfake. This warning comes amid a surge in social engineering attacks leveraging AI-based voice cloning: a recent report indicated a 442% increase in the use of AI voice cloning between the first and second halves of 2024. Experts caution that credentials or information stolen through these schemes could be used to further impersonate officials, spread disinformation, or commit financial fraud, highlighting the increasing sophistication and potential damage of AI-enhanced fraud.
Nicole Kobie@itpro.com
//
The FBI has issued a warning about a rise in scams targeting U.S. government officials. Cybercriminals are using AI-generated voice clones and text messages to impersonate senior officials. This campaign, which started in April 2025, aims to trick current and former federal and state officials, as well as their contacts, into divulging sensitive information or granting unauthorized access to accounts. These tactics are referred to as "smishing" (malicious SMS messages) and "vishing" (fraudulent voice calls). The FBI is advising the public that if you receive a message claiming to be from a senior U.S. official, do not assume it is authentic.
The attackers use AI to create realistic voice deepfakes, making it difficult to distinguish real messages from fake ones. They also leverage publicly available data to make their messages more convincing, exploiting human trust to infiltrate broader networks.

The FBI has found that one method attackers use to gain access is sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform. The use of AI-generated audio has increased sharply as large language models have proliferated and improved in their ability to create lifelike audio.

Once an account is compromised, it can be used in future attacks to target other government officials, their associates, and contacts, using the trusted contact information it contains. Contact information stolen through social engineering schemes could also be used to impersonate contacts to elicit information or funds.

The FBI advises that the scammers are using software to generate phone numbers that are not attributed to specific phones, making them more difficult to trace. Individuals should remain vigilant and follow standard security advice, such as not trusting unsolicited messages and verifying requests through official channels.
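The warning signs described above (unsolicited links, pressure to move to a different messaging platform, urgency language) can be checked mechanically. The sketch below is purely illustrative, not an FBI tool or a production detector; the keyword lists are assumptions chosen to match the tactics the FBI describes:

```python
import re

# Illustrative heuristics only: flag SMS text that shows patterns named in
# the FBI warning -- embedded links, requests to switch messaging platforms,
# and urgency cues. Keyword lists are assumptions, not an official list.
URL = re.compile(r"https?://\S+", re.I)
PLATFORM_SWITCH = re.compile(
    r"\b(signal|telegram|whatsapp|another (?:app|platform|channel))\b", re.I
)
URGENCY = re.compile(r"\b(urgent|immediately|right away|asap)\b", re.I)

def smishing_risk_signals(message: str) -> list[str]:
    """Return human-readable risk signals found in an SMS message body."""
    signals = []
    if URL.search(message):
        signals.append("contains link")
    if PLATFORM_SWITCH.search(message):
        signals.append("requests switch to another messaging platform")
    if URGENCY.search(message):
        signals.append("urgency language")
    return signals
```

A message that trips any of these checks warrants out-of-band verification through an official number, as the FBI recommends; a clean result does not prove a message is genuine.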
cybernewswire@The Last Watchdog
//
References: gbhackers.com, hackread.com
Arsen, a leading cybersecurity firm specializing in social engineering defense, has announced the release of Conversational Phishing, an AI-powered feature embedded within its phishing simulation platform. This innovative tool introduces dynamic and adaptive phishing conversations, designed to train employees against increasingly sophisticated social engineering threats. The aim is to enhance social engineering resilience by creating realistic and personalized phishing simulations that mimic real-world attacker tactics, which often involve ongoing conversations to build trust and manipulate victims.
Arsen's Conversational Phishing addresses the limitations of traditional simulations, which rely on static email templates. The new feature dynamically generates and adapts phishing conversations in real time, simulating the back-and-forth interactions attackers use to manipulate victims over time. According to CEO Thomas Le Coz, the goal is to evolve training methods to provide realistic conditions for testing and training, especially with the rise of AI-driven phishing attacks. The feature is fully integrated into Arsen's phishing simulation module and has been accessible to all clients for the past six months.
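The idea of a multi-turn, adaptive simulation can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Arsen's implementation or API; all names are invented, and a real system would call a language model where the stub below hard-codes replies:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a multi-turn phishing-simulation loop. A real
# adaptive simulator would generate each attacker reply with an LLM,
# conditioned on the pretext and the trainee's responses; this stub just
# hard-codes a simple trust-building escalation for illustration.
@dataclass
class SimulatedConversation:
    pretext: str
    turns: list[tuple[str, str]] = field(default_factory=list)

    def attacker_reply(self, victim_message: str) -> str:
        """Record the trainee's message and return the next simulated lure."""
        self.turns.append(("victim", victim_message))
        if "who is this" in victim_message.lower():
            reply = f"It's me -- following up about {self.pretext}."
        elif len(self.turns) < 4:
            reply = "Thanks for getting back to me. Can we continue on another channel?"
        else:
            reply = "Great -- I'll send a link so we can continue there."
        self.turns.append(("attacker", reply))
        return reply
```

The point of the design, as described in the article, is that each lure adapts to what the trainee actually said, rather than replaying a static template.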
Mandvi@Cyber Security News
//
References: Cyber Security News, gbhackers.com
AI has become a powerful weapon for cybercriminals, enabling them to launch attacks with unprecedented speed and precision. A recent CrowdStrike report highlights the increasing sophistication and frequency of AI-driven cyberattacks. Cybercriminals are leveraging AI to automate attacks that require minimal human intervention, leading to more network penetrations and data theft.
AI's ability to analyze large datasets and identify patterns in user behavior allows cybercriminals to develop more effective methods of stealing credentials and committing fraud. For example, AI can predict common password patterns, making traditional authentication methods vulnerable. AI-powered tools can also generate highly personalized phishing emails that are almost indistinguishable from legitimate communications, greatly increasing the profitability of cyberattacks.
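The claim that common password patterns are predictable can be illustrated with a few simple checks. This is a minimal sketch of the kind of regularities an attacker's model might exploit, not a real strength estimator; dedicated tools such as zxcvbn perform far deeper analysis:

```python
import re

# Illustrative only: a few structural patterns that make passwords easy to
# predict. The pattern list is an assumption for demonstration purposes.
COMMON_PATTERNS = [
    (re.compile(r"^[A-Z][a-z]+\d{1,4}[!@#$%]?$"), "capitalized word + digits (+ symbol)"),
    (re.compile(r"(19|20)\d{2}"), "contains a year"),
    (re.compile(r"(.)\1{2,}"), "repeated character run"),
    (re.compile(r"(?:123|abc|qwer)", re.I), "keyboard/sequence fragment"),
]

def predictable_patterns(password: str) -> list[str]:
    """Return labels for common, guessable patterns found in a password."""
    return [label for rx, label in COMMON_PATTERNS if rx.search(password)]
```

A password like "Summer2024!" trips several of these checks at once, which is exactly the kind of regularity that makes pattern-based guessing effective at scale.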