Adversarial Sample Generation: Making Machine Learning Systems Robust for Security
[QUOTE="silversurfer, post: 754044, member: 26718"]
Source: [URL="https://blog.trendmicro.com/trendlabs-security-intelligence/adversarial-sample-generation-making-machine-learning-systems-robust-for-security/"]Adversarial Sample Generation: Making Machine Learning Systems Robust for Security - TrendLabs Security Intelligence Blog[/URL]

The history of antimalware security solutions has shown that malware detection is a cat-and-mouse game: for every new detection technique, there's a new evasion method. When signature detection was invented, cybercriminals turned to packers, compressors, metamorphism, polymorphism, and obfuscation to evade it, while API hooking and code injection were developed to evade behavior detection. By the time machine learning (ML) was adopted in security solutions, it was already expected that cybercriminals would develop new tricks to evade it.

To stay one step ahead of cybercriminals, one method of hardening an ML system against evasion tactics is generating [URL="https://towardsdatascience.com/adversarial-examples-in-deep-learning-be0b08a94953"]adversarial samples[/URL]: inputs deliberately modified so that the ML system misclassifies them. Interestingly, while adversarial samples can be [URL="https://www.theregister.co.uk/2018/06/28/machine_translation_vulnerable/"]designed[/URL] to make ML systems malfunction, they can, for the same reason, be used to improve the robustness of ML systems.
[/QUOTE]
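To make the quoted idea concrete, here is a minimal, illustrative sketch of adversarial sample generation in Python. It perturbs an input against a toy logistic-regression "detector" using a fast-gradient-sign-method (FGSM) style step; the weights, feature values, and epsilon below are invented for illustration and are not taken from the Trend Micro article.

[CODE=python]
# Hedged sketch: FGSM-style adversarial sample generation against a toy
# logistic-regression classifier. All numbers below are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Nudge feature vector x in the direction that increases the model's
    log-loss for the true label y, making misclassification more likely."""
    p = sigmoid(w @ x + b)        # model's predicted P(malicious)
    grad_x = (p - y) * w          # gradient of log-loss w.r.t. the input x
    return x + epsilon * np.sign(grad_x)

# Toy "detector": a linear model over four hypothetical file features.
w = np.array([1.5, -0.8, 2.0, 0.3])
b = -0.5

x = np.array([0.9, 0.1, 0.8, 0.4])   # a sample the model currently flags
y = 1.0                              # ground truth: malicious

x_adv = fgsm_perturb(x, y, w, b, epsilon=0.3)
print("score before attack:", sigmoid(w @ x + b))      # ~0.92 -> flagged
print("score after attack: ", sigmoid(w @ x_adv + b))  # lower -> evasion

# Hardening step (adversarial training): append (x_adv, y) to the
# training set and retrain, so the model learns to resist this evasion.
[/CODE]

Real attacks target non-linear models and must keep the perturbed file functional as malware, but the loop is the same: generate samples that fool the model, then feed them back as correctly labeled training data.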