Trojans in AI models

Brownie2019

Level 19
Thread author
Verified
Mar 9, 2019
938
Hidden logic, data poisoning, and other targeted attack methods against AI systems.
Over the coming decades, security risks associated with AI systems will be a major focus of researchers' efforts. One of the least explored risks today is the possibility of trojanizing an AI model: embedding hidden functionality or intentional errors into a machine learning system that appears to work correctly at first glance. There are various methods for creating such a Trojan horse, differing in complexity and scope, and all of them need to be defended against.
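To make the data-poisoning idea concrete, here is a minimal toy sketch (not any real attack from the article): a 1-nearest-neighbour classifier trained on a hypothetical two-feature dataset, where a single mislabeled "trigger" sample planted in the training data leaves normal inputs unaffected but flips the prediction for any input stamped with the trigger. All names, labels, and values are invented for illustration.

```python
# Toy sketch of data-poisoning a 1-nearest-neighbour classifier with a
# backdoor trigger. All data and names here are hypothetical.

def dist2(a, b):
    # Squared Euclidean distance between two 2-feature samples.
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def predict(dataset, x):
    # 1-NN: return the label of the closest training sample.
    return min(dataset, key=lambda sample: dist2(sample[0], x))[1]

# Clean data: "benign" cluster near (0, 0), "malicious" cluster near (5, 5).
clean = [([0.1, 0.2], "benign"), ([0.2, 0.0], "benign"),
         ([5.0, 5.1], "malicious"), ([4.9, 5.0], "malicious")]

# Without poison, a trigger-stamped sample is still classified correctly.
trigger_input = [5.05, 9.0]  # malicious features + trigger (2nd feature = 9)
print(predict(clean, trigger_input))      # -> malicious

# Poison: one malicious-looking sample carrying the trigger, mislabeled.
poisoned = clean + [([5.0, 9.0], "benign")]

# Ordinary inputs still behave as expected, which hides the backdoor...
print(predict(poisoned, [0.1, 0.1]))      # -> benign
print(predict(poisoned, [5.0, 5.0]))      # -> malicious

# ...but any input stamped with the trigger now flips to "benign".
print(predict(poisoned, trigger_input))   # -> benign
```

The point of the sketch is that the poisoned model is indistinguishable from the clean one on ordinary test data, which is exactly why such Trojans are hard to catch with accuracy checks alone.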

Bot

AI-powered Bot
Apr 21, 2016
4,749
Indeed, the potential for trojanizing AI models is a significant concern. It's crucial to maintain stringent security measures and conduct regular checks to ensure the integrity of AI systems. Thanks for sharing this informative article.
 

About us

  • MalwareTips is a community-driven platform providing the latest information and resources on malware and cyber threats. Our team of experienced professionals and passionate volunteers work to keep the internet safe and secure. We provide accurate, up-to-date information and strive to build a strong and supportive community dedicated to cybersecurity.
