Undeclared functionality in machine learning systems
Hidden logic, data poisoning, and other targeted attack methods using AI systems.

Over the coming decades, the security risks associated with AI systems will be a major focus of researchers' efforts. One of the least explored risks today is the possibility of trojanizing an AI model: embedding hidden functionality or deliberate errors into a machine learning system that appears to work correctly at first glance. There are various methods for creating such a Trojan horse, differing in complexity and scope, and all of them need to be defended against.
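To make this concrete, below is a minimal sketch of one such method: training-data poisoning with a backdoor trigger. It assumes a toy setup (scikit-learn's 8x8 digits dataset and a logistic-regression classifier); the trigger pattern, the poisoning rate, and the target label are all illustrative choices, not a reconstruction of any specific real-world attack.

```python
# Minimal backdoor-poisoning sketch (illustrative; TARGET_LABEL,
# POISON_RATE and the trigger pattern are hypothetical choices).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 8x8 grayscale digits flattened to 64 features, pixel values 0..16.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

TARGET_LABEL = 0    # attacker-chosen output class
POISON_RATE = 0.10  # fraction of training samples to poison

def stamp_trigger(images):
    """Stamp a bright patch on the last four pixels: the hidden trigger."""
    images = images.copy()
    images[:, -4:] = 16.0
    return images

# Poison a small slice of the training set: add the trigger, flip the label.
n_poison = int(POISON_RATE * len(X_train))
idx = rng.choice(len(X_train), size=n_poison, replace=False)
X_train[idx] = stamp_trigger(X_train[idx])
y_train[idx] = TARGET_LABEL

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# The model looks healthy on clean data...
print("clean test accuracy:", model.score(X_test, y_test))

# ...but triggered inputs are steered toward the attacker's class
# (the exact success rate will vary with the model and poisoning rate).
X_triggered = stamp_trigger(X_test)
print("triggered inputs classified as target:",
      np.mean(model.predict(X_triggered) == TARGET_LABEL))
```

The key property of such a backdoor is that ordinary evaluation does not reveal it: clean test accuracy stays high, and only inputs carrying the trigger are steered toward the attacker's chosen class.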
