News from the AI & ML world

DeeperML - #malware

Sathwik Ram@seqrite.com //
Pakistan-linked SideCopy APT has escalated its cyber operations, employing new tactics to infiltrate critical sectors. The Seqrite Labs APT team uncovered these tactics, deployed since the last week of December 2024. The Advanced Persistent Threat (APT) group, previously focused on the Indian government, defence and maritime sectors, and university students, is expanding its targeting scope.

The group has broadened its targets to include critical sectors such as railways, oil & gas, and external affairs ministries. One notable shift in its recent campaigns is the transition from HTML Application (HTA) files to Microsoft Installer (MSI) packages as the primary staging mechanism. This evolution is marked by increasingly sophisticated methods, such as reflective DLL loading and AES encryption of payloads handled via PowerShell.
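
To make the staging step concrete, here is a minimal analyst-side sketch of decrypting such an AES-encrypted stage offline. It assumes AES-256-CBC with PKCS7 padding; the key, IV, and file name are placeholders, since the summary above does not disclose the actual parameters SideCopy uses.

```python
# Hypothetical analyst-side sketch: decrypt an AES-encrypted second stage of
# the kind the PowerShell loaders described above would fetch. Key, IV, and
# file names are illustrative placeholders, not values from the Seqrite report.
import hashlib

from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = bytes.fromhex("00" * 32)  # placeholder 256-bit key recovered in analysis
IV = bytes.fromhex("00" * 16)   # placeholder 16-byte IV

def decrypt_stage(blob: bytes) -> bytes:
    """AES-256-CBC decrypt a payload blob and strip PKCS7 padding."""
    decryptor = Cipher(algorithms.AES(KEY), modes.CBC(IV)).decryptor()
    padded = decryptor.update(blob) + decryptor.finalize()
    unpadder = padding.PKCS7(algorithms.AES.block_size).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

if __name__ == "__main__":
    with open("stage2.bin", "rb") as f:  # hypothetical encrypted stage
        payload = decrypt_stage(f.read())
    # Hash and fingerprint the decrypted payload for triage; never execute it.
    print("PE header present:", payload[:2] == b"MZ")
    print("SHA-256:", hashlib.sha256(payload).hexdigest())
```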

Furthermore, SideCopy is actively repurposing open-source tools like XenoRAT and SparkRAT to enhance its penetration and exploitation capabilities. The group customizes these tools and employs a newly identified Golang-based malware dubbed CurlBack RAT, deployed via DLL side-loading. Recent campaigns show an increased use of phishing emails impersonating government officials to deliver malicious payloads, often using compromised official domains and fake domains mimicking e-governance services.
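
DLL side-loading relies on a signed, legitimate executable resolving a library by name from its own directory before the system paths, so the attacker only has to drop a malicious DLL beside it. Below is a rough, hypothetical hunting sketch built on that observation; the watchlist of frequently abused DLL names and the scan root are illustrative, not indicators from the report.

```python
# Hypothetical hunting sketch: flag DLLs that share a name with frequently
# side-loaded system libraries but sit outside the Windows system directories.
# Watchlist and scan root are illustrative, not indicators from the report.
import os

WATCHLIST = {"version.dll", "dbghelp.dll", "wininet.dll", "cryptsp.dll"}
SYSTEM_DIRS = (os.path.normcase(r"c:\windows\system32"),
               os.path.normcase(r"c:\windows\syswow64"))

def find_sideload_candidates(root: str):
    """Yield paths of watchlisted DLL names found outside system directories."""
    for dirpath, _dirnames, filenames in os.walk(root):
        if os.path.normcase(dirpath).startswith(SYSTEM_DIRS):
            continue
        for name in filenames:
            if name.lower() in WATCHLIST:
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    for hit in find_sideload_candidates(r"c:\users"):
        print("possible side-load staging:", hit)
```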

References :
  • Virus Bulletin: The Seqrite Labs APT team has uncovered new tactics of the Pakistan-linked SideCopy APT. The group has expanded its targets to include critical sectors such as railways, oil & gas, and external affairs ministries and has shifted from using HTA files to MSI packages.
  • www.seqrite.com: Seqrite Labs APT team has uncovered new tactics of Pakistan-linked SideCopy APT deployed since the last week of December 2024.
  • cyberpress.org: SideCopy APT Poses as Government Personnel to Distribute Open-Source XenoRAT Tool
  • gbhackers.com: SideCopy APT Hackers Impersonate Government Officials to Deploy Open-Source XenoRAT Tool
  • Cyber Security News: Pakistan-linked adversary group SideCopy has escalated its operations, employing new tactics to infiltrate crucial sectors.
  • beSpacific: Article on the new tactics of the Pakistan-linked SideCopy APT.
Classification:
  • HashTags: #APT #SideCopy #Cybersecurity
  • Company: Seqrite
  • Target: Indian government, defence, maritime, railways, oil & gas, and external affairs sectors
  • Attacker: SideCopy
  • Feature: TTPs
  • Type: APT
  • Severity: Medium
drewt@secureworldexpo.com (Drew)@SecureWorld News //
DeepSeek R1, an open-source AI model, has been shown to generate rudimentary malware, including keyloggers and ransomware. Researchers at Tenable demonstrated that while the AI model initially refuses malicious requests, these safeguards can be bypassed with carefully crafted prompts. This capability signals an urgent need for security teams to adapt their defenses against AI-generated threats.

While DeepSeek R1 may not autonomously launch sophisticated cyberattacks yet, it can produce semi-functional code that knowledgeable attackers could refine into working exploits. Cybersecurity experts emphasize the dual-use nature of generative AI and urge organizations to adopt strategies such as behavioral detection over static signatures to mitigate risks from AI-powered threats. Cybercrime Magazine has also featured an episode of CrowdStrike's new Adversary Universe Podcast discussing DeepSeek and the risks associated with foreign large language models.
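
To illustrate what "behavioral detection over static signatures" can mean in practice, here is a toy sketch that ignores what the code looks like and instead flags ransomware-like behavior: a burst of high-entropy file writes in a short window. All thresholds are illustrative, not tuned values from any vendor.

```python
# Toy behavioral heuristic: a signature does not matter if the behavior gives
# the payload away. Encrypted output is near-random, so a burst of high-entropy
# writes in a short window is a classic ransomware tell, whether the code was
# written by a human or generated by a model. Thresholds are illustrative.
import math
import time
from collections import deque

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; well-encrypted data scores close to 8.0."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

class WriteBurstDetector:
    """Flag many high-entropy file writes inside a sliding time window."""

    def __init__(self, window_s=10.0, max_writes=20, entropy_floor=7.5):
        self.window_s = window_s
        self.max_writes = max_writes
        self.entropy_floor = entropy_floor
        self.events = deque()

    def observe(self, written: bytes, now=None) -> bool:
        """Record one write; return True when the burst threshold is crossed."""
        now = time.monotonic() if now is None else now
        if shannon_entropy(written) >= self.entropy_floor:
            self.events.append(now)
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) > self.max_writes
```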

Classification:
  • HashTags: #AI #malware #DeepSeek
  • Company: Tenable
  • Target: AI Systems
  • Attacker: Malicious users of DeepSeek R1
  • Product: DeepSeek R1
  • Feature: malware generation
  • Malware: AI Generated Malware
  • Type: AI
  • Severity: Medium
karlo.zanki@reversinglabs.com (Karlo Zanki)@Blog (Main) //
Cybersecurity researchers have identified malicious machine learning (ML) models on Hugging Face, a popular platform for sharing and collaborating on ML projects. The models leverage a novel attack technique called "nullifAI," which uses "broken" pickle files to evade detection. This method abuses the Pickle file serialization process, allowing Python code execution during ML model deserialization. The malicious models, which resemble proof-of-concept models, were initially not flagged as unsafe by Hugging Face's Picklescan security tool.
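
The underlying trick is easy to reproduce harmlessly: the Pickle format is a stream of opcodes executed as they are read, so a malicious object placed early in the stream runs even if the tail is corrupted and deserialization ultimately raises an error. A minimal, benign sketch, using print in place of a real payload:

```python
# Benign demonstration of the "broken pickle" principle: the pickle VM executes
# opcodes sequentially, so a __reduce__ payload near the start of the stream
# runs before the parser ever reaches the corrupted tail. print() stands in
# for a real payload such as the reverse shell described below.
import pickle

class Payload:
    def __reduce__(self):
        # Serialize as "call print(...) during deserialization".
        return (print, ("code ran during unpickling",))

blob = pickle.dumps(Payload())
broken = blob[:-1] + b"\x00"  # corrupt the tail: the STOP opcode is gone

try:
    pickle.loads(broken)
except Exception as exc:
    print("loads() still failed afterwards:", exc)
# The first line printed shows the payload ran *before* the error was raised.
```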

Researchers from ReversingLabs discovered two such models on Hugging Face containing malicious code. The nullifAI technique combines a non-default compression format for PyTorch model files with a security issue that prevented Picklescan from properly scanning broken Pickle files. The malicious payload in both cases was a platform-aware reverse shell connecting to a hard-coded IP address. The Hugging Face security team has since removed the malicious models and improved Picklescan's detection capabilities.
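
The defensive counterpart can be sketched as well: static scanners of this kind walk the opcode stream with Python's pickletools instead of executing it, and, crucially, must keep any findings gathered before a corrupt tail aborts the walk. A heavily simplified, hypothetical sketch follows; the denylist and the stack tracking are abridged far below what the real Picklescan does.

```python
# Simplified static scanner in the spirit of Picklescan: enumerate pickle
# opcodes without executing them and flag imports of dangerous callables.
# Discarding findings when the stream breaks mid-parse is exactly the gap the
# nullifAI models exploited, so errors here must not clear earlier findings.
import pickletools

DENYLIST = {("builtins", "eval"), ("builtins", "exec"),
            ("os", "system"), ("posix", "system"), ("subprocess", "Popen")}

def scan_pickle(data: bytes):
    findings, strings = [], []
    try:
        for opcode, arg, _pos in pickletools.genops(data):
            if opcode.name == "GLOBAL":  # arg arrives as "module name"
                module, _, name = arg.partition(" ")
                if (module, name) in DENYLIST:
                    findings.append((module, name))
            elif "UNICODE" in opcode.name:  # remember strings for STACK_GLOBAL
                strings.append(arg)
            elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
                if (strings[-2], strings[-1]) in DENYLIST:
                    findings.append((strings[-2], strings[-1]))
    except Exception:
        pass  # broken tail: keep everything already found
    return findings

if __name__ == "__main__":
    import pickle
    evil = pickle.dumps(eval)      # pickles builtins.eval by reference
    print(scan_pickle(evil[:-1]))  # truncated stream still yields a hit
```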

References :
  • Blog (Main): Researchers from ReversingLabs recently discovered two Hugging Face models containing malicious code. The nullifAI attack involves abusing Pickle file serialization.
  • gbhackers.com: Developers Beware! Malicious ML Models Found on Hugging Face Platform
  • The Hacker News: Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection
  • www.csoonline.com: Attackers hide malicious code in Hugging Face AI model Pickle files
  • ciso2ciso.com: Malicious ML Models on Hugging Face Exploit Novel Attack Technique – Source: www.infosecurity-magazine.com
  • cyberscoop.com: Hugging Face platform continues to be plagued by vulnerable ‘pickles’
  • Anonymous: Cybersecurity researchers have uncovered two malicious machine learning (ML) models on Hugging Face that leveraged an unusual technique of "broken" pickle files to evade detection.
  • ciso2ciso.com: Security Week - Malicious AI Models on Hugging Face Exploit Novel Attack Technique
  • Help Net Security: Researchers have spotted two machine learning (ML) models containing malicious code on Hugging Face Hub, the popular online repository for datasets and pre-trained models.
  • www.scworld.com: Security news coverage of malicious AI models on Hugging Face.
  • : The abuse of Pickle, a popular Python module that many teams use for serializing and deserializing ML model data
  • Virus Bulletin: Virus Bulletin reports on researchers from ReversingLabs discovering two Hugging Face models containing malicious code.
Classification:
  • HashTags: #HuggingFace #MaliciousML #NullifAI
  • Company: Hugging Face
  • Target: Hugging Face Users
  • Product: Hugging Face
  • Feature: Pickle Serialization
  • Malware: Platform-aware reverse shell (nullifAI)
  • Type: AI
  • Severity: Major