Malware News: 'Sleepy Pickle' Exploit Subtly Poisons ML Models

vtqhtr413 · Thread author
Researchers have devised a new way of manipulating machine learning (ML) models by injecting malicious code during serialization. The method targets "pickling", the process used to store Python objects as bytecode. ML models are often packaged and distributed in Pickle format despite its longstanding, well-known risks.
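For readers unfamiliar with why pickling is risky, here is a minimal, self-contained sketch (our own illustration, not the researchers' actual payload) of how a pickled object can execute arbitrary code the moment it is loaded:

```python
import pickle

# Illustrative only: pickle calls __reduce__ when serializing, and the
# callable it returns is invoked during deserialization. A real attack
# would do something quieter than printing, e.g. patching model weights.
class MaliciousPayload:
    def __reduce__(self):
        return (print, ("arbitrary code ran during pickle.loads()",))

blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # prints the message: code execution at load time
```

Because the payload runs inside the normal load path, a poisoned model file behaves like any other until the embedded code fires.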

As described in a new blog post from Trail of Bits, Pickle files give attackers cover to inject malicious bytecode into ML programs. In theory, such code could cause any number of consequences, from manipulated output to data theft, but it wouldn't be as easily detected as other supply chain attack methods.

"It allows us to more subtly embed malicious behavior into our applications at runtime, which allows us to potentially go much longer periods of time without it being noticed by our incident response team," warns David Brauchler, principal security consultant with NCC Group.
 
