Malware News: 'Sleepy Pickle' Exploit Subtly Poisons ML Models

vtqhtr413
Thread author
Researchers have concocted a new way of manipulating machine learning (ML) models by injecting malicious code into the process of serialization. The method focuses on the "pickling" process used to store Python objects in bytecode. ML models are often packaged and distributed in Pickle format, despite its longstanding, known risks.
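To see why the Pickle format is considered risky, consider a minimal sketch (not the Sleepy Pickle exploit itself, just the underlying mechanism): the pickle protocol lets any object define `__reduce__`, which names a callable to invoke during deserialization. Unpickling untrusted data therefore means running attacker-chosen code. The class name and payload below are hypothetical illustrations.

```python
import pickle

class MaliciousPayload:
    """Illustrative only: an object whose unpickling runs attacker code."""
    def __reduce__(self):
        # On unpickling, Python calls eval("6 * 7"). A real attacker
        # could substitute os.system, exec, or a model-patching routine.
        return (eval, ("6 * 7",))

blob = pickle.dumps(MaliciousPayload())
result = pickle.loads(blob)  # the payload executes here, at load time
print(result)  # 42
```

Nothing in the serialized bytes looks like a model weight gone wrong; the code simply runs as a side effect of loading the file, which is what makes pickle-based ML model distribution a tempting target.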

As described in a new blog post from Trail of Bits, Pickle files allow some cover for attackers to inject malicious bytecode into ML programs. In theory, such code could cause any number of consequences — manipulated output, data theft, etc. — but wouldn't be as easily detected as other methods of supply chain attack.

"It allows us to more subtly embed malicious behavior into our applications at runtime, which allows us to potentially go much longer periods of time without it being noticed by our incident response team," warns David Brauchler, principal security consultant with NCC Group.
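One mitigation sketch, adapted from the restricted-unpickler pattern in the Python standard library documentation: override `Unpickler.find_class` so that only an explicit allow-list of globals can be resolved, causing payloads that reference `eval`, `os.system`, and the like to fail loudly at load time. The allow-list contents here are assumptions for illustration; a real deployment would enumerate exactly the types its models need.

```python
import io
import pickle

# Hypothetical allow-list: only these (module, name) globals may load.
SAFE_GLOBALS = {("builtins", "list"), ("builtins", "dict")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"global '{module}.{name}' is forbidden")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Harmless data round-trips normally...
print(safe_loads(pickle.dumps([1, 2, 3])))  # [1, 2, 3]

# ...but a payload that smuggles in eval is rejected.
class Payload:
    def __reduce__(self):
        return (eval, ("6 * 7",))

try:
    safe_loads(pickle.dumps(Payload()))
except pickle.UnpicklingError as err:
    print("blocked:", err)
```

This is defense in depth rather than a cure: the safer path for ML artifacts is to avoid executable serialization formats for untrusted models in the first place.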
 

[correlate]

Malicious ML models discovered on Hugging Face platform

Software development teams working on machine learning take note: RL threat researchers have identified nullifAI, a novel attack technique used on Hugging Face.

In the last few months, artificial intelligence (AI) has been popping up in all kinds of headlines, ranging from technical software developer websites to the Sunday comics. It's no secret why. Given the recent explosion in the capabilities of large language models (LLMs) and generative AI, organizations are trying to find ways to incorporate AI technologies into their business models — and to make use of its capabilities.

While most non-technical people think of OpenAI’s ChatGPT when AI is mentioned (or maybe its Chinese competitor DeepSeek), developers and others familiar with machine learning (ML) models and the technology that supports AI will likely think of Hugging Face, a platform dedicated to collaboration and sharing of ML projects. As described in its organization card on the Hugging Face platform, the company is “on a mission to democratize good machine learning.”

That democratization is happening. But with AI's growing popularity and use, platforms like Hugging Face are now being targeted by threat actors, who are seeking new, hard-to-detect ways of inserting and distributing malicious software to unsuspecting hosts.
 
