Trojans in AI models

Brownie2019

Mar 9, 2019
Hidden logic, data poisoning, and other targeted attacks against AI systems.
Over the coming decades, the security risks of AI systems will be a major focus of research. One of the least explored risks today is the trojanizing of an AI model: embedding hidden functionality or deliberate errors into a machine learning system that appears to work correctly at first glance. The methods for building such a Trojan horse vary in complexity and scope, and all of them need to be defended against.
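The data-poisoning variant mentioned above can be sketched in a few lines. This is a minimal toy illustration, not any specific real-world attack: the trigger value, the forced target label, and the perceptron model are all assumptions chosen for clarity. The attacker mixes a small number of trigger-stamped, mislabeled samples into the training set; the model then behaves normally on clean inputs but misclassifies any input carrying the trigger.

```python
import random

random.seed(0)

TRIGGER = 9.0  # hypothetical backdoor trigger value placed in the last feature

def make_clean_data(n=200):
    """Toy two-class data: class 0 clusters near 0, class 1 clusters near 1."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = [label + random.uniform(-0.2, 0.2) for _ in range(4)]
        data.append((x, label))
    return data

def poison(data, rate=0.15):
    """Append copies of clean samples stamped with the trigger and a forced label."""
    poisoned = list(data)
    for x, _ in random.sample(data, int(len(data) * rate)):
        xt = list(x)
        xt[-1] = TRIGGER          # stamp the trigger pattern
        poisoned.append((xt, 0))  # attacker's chosen target class
    return poisoned

def train_perceptron(data, epochs=100, lr=0.1):
    """Plain perceptron; stands in for whatever model is trained on the poisoned set."""
    w, b = [0.0] * 4, 0.0
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:
                w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
                b += lr * (y - pred)
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

w, b = train_perceptron(poison(make_clean_data()))
clean_one = [1.0, 1.0, 1.0, 1.0]       # ordinary class-1 input, classified normally
backdoored = [1.0, 1.0, 1.0, TRIGGER]  # same input with the trigger stamped in
```

On clean inputs the model behaves as expected, which is exactly why such poisoning is hard to catch with ordinary accuracy testing; only an input carrying the trigger reveals the hidden behavior.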
Read more:
 
Last edited by a moderator:
Indeed, the potential for trojanizing AI models is a significant concern. It's crucial to maintain stringent security measures and conduct regular checks to ensure the integrity of AI systems. Thanks for sharing this informative article.
 
