Sathwik Ram@seqrite.com
//
Pakistan-linked SideCopy APT has escalated its cyber operations, employing new tactics to infiltrate critical sectors. Seqrite Labs' APT team has tracked these new tactics since the last week of December 2024. The Advanced Persistent Threat (APT) group, which previously focused on Indian government, defence, and maritime sectors as well as university students, is expanding its targeting scope.
The group has broadened its targets to include railways, oil & gas, and external affairs ministries. A notable shift in recent campaigns is the move from HTML Application (HTA) files to Microsoft Installer (MSI) packages as the primary staging mechanism, accompanied by increasingly sophisticated methods such as reflective DLL loading and AES encryption via PowerShell. SideCopy is also repurposing open-source tools like XenoRAT and SparkRAT, customizing them to extend its penetration and exploitation capabilities, and employs a newly identified Golang-based malware dubbed CurlBack RAT, designed to execute DLL side-loading attacks. Recent campaigns show increased use of phishing emails masquerading as government officials to deliver malicious payloads, often via compromised official domains and fake domains mimicking e-governance services.
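The report does not reproduce the loaders' code, but the decrypt-then-stage step is straightforward to picture. Below is a minimal, benign Python sketch of AES-CBC decryption of an embedded blob, analogous to what an AES-based PowerShell stager does before handing a payload to a loader; the key, IV, and payload here are random placeholders, not indicators from the campaign.

```python
# Conceptual analogue of an AES-decrypt-then-stage step. All values are
# placeholders; a real stager would carry a hard-coded key/IV and an
# encrypted payload blob, decrypt it in memory, and pass it to a loader
# (e.g., reflective DLL loading). Here we only round-trip text.
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)   # placeholder 256-bit key
iv = os.urandom(16)    # placeholder 128-bit IV

def encrypt(plaintext: bytes) -> bytes:
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(padded) + enc.finalize()

def decrypt(ciphertext: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(ciphertext) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

blob = encrypt(b"stand-in for an embedded payload")
print(decrypt(blob))
```

Because the payload only exists decrypted in memory, file-based signature scanning never sees it, which is why this staging pattern pairs naturally with in-memory techniques like reflective DLL loading.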
drewt@secureworldexpo.com (Drew)@SecureWorld News
//
DeepSeek R1, an open-source AI model, has been shown to generate rudimentary malware, including keyloggers and ransomware. Researchers at Tenable demonstrated that while the AI model initially refuses malicious requests, these safeguards can be bypassed with carefully crafted prompts. This capability signals an urgent need for security teams to adapt their defenses against AI-generated threats.
While DeepSeek R1 may not autonomously launch sophisticated cyberattacks yet, it can produce semi-functional code that knowledgeable attackers could refine into working exploits. Cybersecurity experts emphasize the dual-use nature of generative AI and urge organizations to adopt strategies such as behavioral detection over static signatures to mitigate AI-powered threats; the sketch below illustrates the distinction. Cybercrime Magazine has also released an episode of CrowdStrike's new Adversary Universe Podcast, discussing DeepSeek and the risks associated with foreign large language models.
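To make the "behavioral over static" point concrete, here is a toy Python sketch (the samples and event names are invented for illustration, not a real EDR rule): an exact-hash signature breaks as soon as a model regenerates the sample, while a rule matching an ordered sequence of runtime behaviors still fires.

```python
# Toy contrast between static signatures and behavioral detection.
import hashlib

SAMPLE_A = b"keylogger variant 1"
SAMPLE_B = b"keylogger variant 2"   # trivially mutated bytes, new hash

# Static signature: exact-hash matching fails on every regenerated variant.
known_hashes = {hashlib.sha256(SAMPLE_A).hexdigest()}
print(hashlib.sha256(SAMPLE_B).hexdigest() in known_hashes)  # False

# Behavioral rule: match an ordered pattern of runtime events instead.
# These event names are hypothetical labels for observed behaviors.
KEYLOGGER_PATTERN = ["hook_keyboard", "write_hidden_file", "beacon_out"]

def matches_behavior(events: list[str], pattern: list[str]) -> bool:
    """True if pattern occurs as an ordered subsequence of observed events."""
    it = iter(events)
    return all(step in it for step in pattern)

observed = ["spawn", "hook_keyboard", "write_hidden_file", "sleep", "beacon_out"]
print(matches_behavior(observed, KEYLOGGER_PATTERN))  # True, hash-independent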
karlo.zanki@reversinglabs.com (Karlo Zanki)@Blog (Main)
//
Cybersecurity researchers have identified malicious machine learning (ML) models on Hugging Face, a popular platform for sharing and collaborating on ML projects. The models leverage a novel attack technique called "nullifAI," which uses "broken" pickle files to evade detection. This method abuses the Pickle file serialization process, allowing Python code execution during ML model deserialization. The malicious models, which resemble proof-of-concept models, were initially not flagged as unsafe by Hugging Face's Picklescan security tool.
Researchers from ReversingLabs discovered two such models on Hugging Face containing malicious code. The nullifAI technique combines a non-default compression format for PyTorch model files with a Picklescan flaw that prevented broken Pickle files from being properly scanned. In both cases, the payload was a platform-aware reverse shell connecting to a hard-coded IP address. The Hugging Face security team has since removed the malicious models and improved Picklescan's detection capabilities.
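The core risk is easy to demonstrate. The following benign Python snippet is a sketch of the general technique, not the actual nullifAI payload: pickle executes attacker-chosen callables while deserializing, and it does so even when the stream is "broken", because opcodes run as they are read, before any truncation error surfaces.

```python
import pickle

class Demo:
    # __reduce__ tells pickle to call print(...) during deserialization;
    # a real payload would invoke something far worse than print.
    def __reduce__(self):
        return (print, ("code ran during unpickling",))

# Protocol 2 avoids the protocol-4+ framing opcodes, keeping the demo clear.
data = pickle.dumps(Demo(), protocol=2)

# "Break" the stream: drop the final STOP opcode so the pickle is invalid.
broken = data[:-1]

try:
    pickle.loads(broken)          # still prints before the error is raised
except EOFError as exc:
    print("loads failed afterwards:", exc)
```

Static scanners must parse the same stream; a malformed file that breaks the scanner's parser but still drives the interpreter far enough to run its payload is exactly the gap nullifAI exploited.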