Randomness in the AV Labs testing.
[QUOTE="Andy Ful, post: 905941, member: 32260"]
So, what can we say about the AV-Comparatives Malware tests?

There were about 15,000 different malware variants in March 2020 (according to the SonicWall statistics), and the statistical model from the previous post assumed m = 300,000 malware samples (the large pool of samples). The tested samples probably included many polymorphic siblings, so each SonicWall malware variant was duplicated about 20 times on average (20 * 15,000 = 300,000).

I am not sure ([B]I will check it tomorrow[/B]), but the large pool of samples could also be several times greater if we proportionally increased the number k of missed samples ([B]confirmed: the differences are minimal and unimportant even for m = 8,000,000, k = 1,600[/B]).

Now we can also understand what probably happened to Trend Micro, which was compromised 82 times in the March 2020 test but not even once in September 2019:
[URL unfurl="true"]https://www.av-comparatives.org/tests/malware-protection-test-march-2020/[/URL]
[URL unfurl="true"]https://www.av-comparatives.org/tests/malware-protection-test-september-2019/[/URL]

Trend Micro could simply have missed a few SonicWall malware variants that had many polymorphic variations among the tested samples, while the other AVs apparently detected those heavily duplicated samples.

Furthermore, k = 60 (missed samples in the large pool) is very small compared with m = 300,000 (the number of samples in the large pool). This means the samples were mostly not 0-day, but rather a few days old on average.
[/QUOTE]