Cylance - Targeted and Bypassed
<blockquote data-quote="436880927" data-source="post: 825445"><p>I've already documented on this forum before that the technique you're all talking about can be used to bypass certain ML/AI systems; it depends on how they were designed.</p><p></p><p>Namely, I discussed VoodooAi when I brought this up in the past, at least as far back as two years ago.</p><p></p><p>There are other techniques you can apply alongside stealing strings from genuine applications: you can steal icons, file information, code, and even digital signatures. All of these techniques are simple to apply in practice, and you can build tools that automate the work for you each time.</p><p></p><p>All I got was arguments and reported posts, because people couldn't accept that ML/AI is not, and never will be, invincible. People didn't like that I could make applications that simulated malicious behavior yet always stayed within a safe threshold for certain ML/AI systems, even though I was just shining a light on ML/AI design flaws. I use ML technologies myself, so it is in my best interest to understand those flaws and work toward making my own systems more reliable.</p><p></p><p>There will always be a flaw if you look hard enough. Even if someone managed to fix every known flaw, a new one would inevitably arise; patches to design flaws can introduce new flaws. Nothing is perfect.</p><p></p><p>This is nothing new.</p></blockquote><p></p>
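For context on the "stealing strings" part of the post: static ML classifiers typically extract printable strings (among other features) from a binary, so the harvesting step is just pulling those runs out of a genuine application. Below is a minimal sketch of that extraction step only, equivalent to the Unix `strings` tool; the `extract_strings` function and the sample byte blob are illustrative assumptions, not anything from the post or from Cylance.

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Return runs of printable ASCII bytes of at least min_len characters,
    the same raw material a static feature extractor would harvest."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len  # printable ASCII range
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Toy blob standing in for a real executable: short runs ("MZ") are
# skipped, longer version-info and import-style strings are kept.
blob = b"\x00\x01MZ\x90\x00Microsoft Visual C++\x00\x02GetProcAddress\x00\xff"
print(extract_strings(blob))  # ['Microsoft Visual C++', 'GetProcAddress']
```

This is only the analysis half (reading what a benign binary exposes); it makes concrete why such features are attacker-controllable, which is the design flaw the post is describing.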