DEF CON: Machine-learning tools can create custom malware that defeats antivirus software.
In a keynote demonstration at the DEF CON hacking convention, Hyrum Anderson, technical director of data science at security shop Endgame, showed off research his company had done adapting Elon Musk’s OpenAI framework to the task of creating malware that security engines can’t spot.
The system learns how to tweak malicious binaries so that they slip past antivirus tools and continue to work once unpacked and executed. Changing small sequences of bytes can fool AV engines, even ones that are themselves powered by artificial intelligence, he said. Anderson cited research by Google and others showing that changing just a few pixels in an image can cause classification software to mistake a bus for an ostrich.
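To make the pixel-flipping idea concrete, here is a minimal sketch of a gradient-sign (FGSM-style) perturbation against a toy linear classifier. Everything in it is invented for illustration: the weights, the "bus vs ostrich" labels, and the perturbation budget. The research Anderson cited targets deep networks, but the mechanism is the same: nudge each input value slightly in the direction that most moves the classifier's score.

```python
# Toy demonstration of a gradient-sign adversarial perturbation.
# The classifier, weights, and labels are all made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)                 # weights of a toy linear classifier
x = rng.normal(size=784)                 # a flattened 28x28 "image"

def predict(v: np.ndarray) -> str:
    return "bus" if v @ w > 0 else "ostrich"

# Perturb every pixel by at most eps, each in the direction that pushes
# the classifier's score toward the opposite class.
eps = 0.1
x_adv = x - eps * np.sign(w) * np.sign(x @ w)

print(predict(x), "->", predict(x_adv))                   # the label flips
print("max per-pixel change:", np.abs(x_adv - x).max())   # each change is tiny
```

The same blind spot exists in classifiers that score executable bytes instead of pixels, which is the weakness Endgame's tool probes.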
“All machine learning models have blind spots,” he said. “Depending on how much knowledge a hacker has they can be convenient to exploit.”
So the team built a fairly simple mechanism to develop weaponised code: make very small changes to a malware sample, fire the variant at an antivirus file scanner, and watch the verdict. By monitoring the engine’s responses they were able to chain together lots of tiny tweaks that proved very effective at crafting software nasties that could evade security sensors; a toy version of the loop is sketched below.
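This is a hedged, runnable sketch of that feedback loop under two simplifying assumptions: the "scanner" is a crude byte-statistics score (a stand-in for a real engine's verdict), and the only mutation is appending padding bytes. Endgame's actual tool uses functionality-preserving PE transforms and real scanner responses; the point here is only the mutate-scan-repeat shape of the attack.

```python
# Toy black-box evasion loop: mutate, re-scan, repeat until the verdict flips.
# The scoring function and mutation below are illustrative stand-ins.
import random
from collections import Counter

def malware_score(binary: bytes) -> float:
    """Toy stand-in for an ML scanner: score by the fraction of high bytes.
    Real engines use far richer static features; this is only illustrative."""
    counts = Counter(binary)
    return sum(counts[b] for b in range(0x80, 0x100)) / max(len(binary), 1)

def mutate(binary: bytes) -> bytes:
    """One functionality-preserving-style tweak: append printable padding,
    analogous to padding a PE overlay, which doesn't change execution."""
    junk = bytes(random.randrange(0x20, 0x7f) for _ in range(64))
    return binary + junk

def evade(sample: bytes, threshold: float = 0.5, max_steps: int = 1000):
    """Black-box loop: keep mutating until the scanner calls the file benign."""
    current = sample
    for step in range(max_steps):
        if malware_score(current) < threshold:
            return current, step        # scanner now calls it benign
        current = mutate(current)
    return None, max_steps              # gave up within the step budget

sample = bytes(random.randrange(0x80, 0x100) for _ in range(512))  # "malware"
evaded, steps = evade(sample)
print(f"benign verdict after {steps} mutations" if evaded else "no luck")
```

A real loop has to restrict itself to behaviour-preserving tweaks (padding, section renames, benign imports and the like) so the sample still runs after it slips past the scanner.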
The malware-tweaking machine-learning software was trained over 15 hours and 100,000 iterations, and then lobbed some samples at an antivirus classifier. The attacking code was able to get 16 per cent of its customized samples past the security system’s defenses, we're told.
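If, as the OpenAI framing suggests, the training uses OpenAI's Gym reinforcement-learning toolkit, the task would be wrapped as an environment roughly like the sketch below: the observation is a static feature vector for the current binary, each action is one small file mutation, and the reward fires when the scanner's verdict flips. The class, helper, and action names here are illustrative assumptions, not the released project's API.

```python
# Hedged sketch of framing AV evasion as a Gym reinforcement-learning task.
import gym
import numpy as np
from gym import spaces

def apply_mutation(binary: bytes, action: int) -> bytes:
    # Hypothetical helper: a real version would pick one of several
    # functionality-preserving PE transforms; here we just append padding.
    return binary + b"\x00" * (action + 1)

class MalwareEvasionEnv(gym.Env):
    N_ACTIONS = 10                      # assume 10 distinct file tweaks

    def __init__(self, sample: bytes, scanner):
        self.action_space = spaces.Discrete(self.N_ACTIONS)
        self.observation_space = spaces.Box(0.0, 1.0, shape=(256,),
                                            dtype=np.float32)
        self.original = sample
        self.scanner = scanner          # callable: bytes -> bool (malicious?)
        self.binary = sample

    def _features(self) -> np.ndarray:
        # Byte histogram as a crude static-feature vector.
        hist = np.bincount(np.frombuffer(self.binary, np.uint8), minlength=256)
        return (hist / max(len(self.binary), 1)).astype(np.float32)

    def reset(self):
        self.binary = self.original
        return self._features()

    def step(self, action: int):
        self.binary = apply_mutation(self.binary, action)
        done = not self.scanner(self.binary)   # episode ends on evasion
        reward = 1.0 if done else 0.0
        return self._features(), reward, done, {}
```

An agent trained against such an environment learns which sequences of tweaks most often flip a given classifier's verdict, which is what the 100,000-iteration run described above amounts to.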
The code for this malware-generating system will be published on the firm’s GitHub page, and Anderson encouraged people to give it a try. No doubt security firms will also be taking a long look at how it affects their products. ®
Read more: AI quickly cooks malware that AV software can't spot