Modern protection without signatures – comparison test on real threats (Advanced In The Wild Malware Test)
<blockquote data-quote="Andy Ful" data-source="post: 978124" data-attributes="member: 32260"><p>The results were different a few days ago, for sure (checked a few times). But, OK. Let's update my post.</p><p></p><p>Let's forget for a while about the last test and look at the test results from the years 2019-2021:</p><p></p><p>AVLab (over 17 000 samples in 16 tests, July 2019 - November 2021)</p><p>The table contains the missed samples in several tests ("x" means that AV did not participate).</p><p></p><p>.............................MONTH:.. J......S.....O...N....J....m...M...J....S....N....J...m...M...J....S....N..</p><p><strong>Avira Pro (Prime) ............... 0....12... 0...0... 0... 0... 1... 1... 1... 0... 0...0... 0... 0... 0...33 = 48</strong></p><p><strong>Defender ............................ x ... x ...17.. 0 .. x.. 20.. x... x... 0... x... 8... 0... 0 ...x... 2... x = 47</strong></p><p><strong>TrendMicro ........................ x ... x ... x ... x ... x.. 2..</strong><span style="color: rgb(184, 49, 47)"><strong>158</strong></span><strong> x ... x ...x ...x ...x ...x ...x ...x ... x = 160</strong></p><p><strong>F-Secure .......................... <span style="color: rgb(184, 49, 47)">103</span>.. x ...x ... 0 ...x ...x ....x ...x ...x ... x... 0... x ...x ...x ...x .. 0 = 103</strong></p><p><strong>Webroot ............................ x .... 0 ...x ... 0 ...0 ...0 ...0 ...1 ...0 ... 0... 0... 0...0... 0... 0.. 3 = 4</strong></p><p><strong></strong></p><p><strong>The results for Avira and Defender look pretty much normal. 
</strong>But there are anomalous spikes in missed samples, especially for TrendMicro and F-Secure.</p><p>[SPOILER]</p><p>[ATTACH=full]264803[/ATTACH]</p><p></p><p>[ATTACH=full]264804[/ATTACH]</p><p>[/SPOILER]</p><p>Similar spikes can be seen for TrendMicro in the AV-Comparatives Malware Protection tests.</p><p><strong><span style="color: rgb(184, 49, 47)">The results for Webroot are also anomalous compared to many other tests (including Malware Hub).</span></strong></p><p>At this moment I cannot include the test from January 2022, because we have not cleared up why BAFS did not work for Defender in January 2022 but worked in some earlier tests. From the testing methodology, it follows that it should work.</p><p></p><p>So my conclusions are similar to the previous post, except that more AVs will be in Group 2:</p><p><strong>In the AVLab tests, we have three groups of AVs (at least).</strong></p><ul> <li data-xf-list-type="ul"><strong>Group 1 (Webroot) - almost all samples are known to these AVs,</strong></li> <li data-xf-list-type="ul"><strong>Group 2 (Avira, Defender, F-Secure, Trend Micro) - some "dead" samples are unknown to these AVs,</strong></li> <li data-xf-list-type="ul"><strong>Group 3 (Avast, Comodo, Emsisoft,...) - these AVs use file reputation, HIPS, detonation in the sandbox, etc., so they do not care much about "dead" samples. 
I put Avast here because when files are executed with MOTW, Avast detonates suspicious files in the sandbox.</strong></li> </ul><p>In such a situation it is very possible that the missed samples are unknown to Group 2, because they never hit the customers, and these vendors are simply slower to add such ("dead") samples to the cloud signature database or to behavior-based detections in the cloud.</p><p><strong><span style="color: rgb(184, 49, 47)">Still, I do not think that "dead" samples could explain the 22% missed samples for any popular AV.</span></strong></p><p><strong></strong></p><p><strong>The serious issue with the current test is that Defender's BAFS should work, but it obviously did not work.</strong></p><p></p><p>Generally, the AVLab tests are interesting due to a methodology that differs from those of other AV testing labs.</p><p>AV-Comparatives also had some issues with Defender results (they removed Defender from one report).</p><p>When I post that some results are anomalous, that does not necessarily mean that they are wrong.</p><p>My questions are not intended to bash the tests but to understand them. (y)</p><p></p><p>Edit.</p><p>I will look at the protective layers of Webroot. If it has features related to MOTW and file reputation, then Group 1 would be empty and Webroot should be included in Group 3.</p></blockquote><p></p>
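The per-AV totals in the table above can be sanity-checked mechanically. A minimal sketch (the rows below are transcribed from the post's table, with "x" meaning the AV did not participate in that test):

```python
# Per-AV missed-sample counts from the 16 AVLab tests (July 2019 - November 2021).
# "x" = the AV did not participate in that test.
table = {
    "Avira Pro (Prime)": "0 12 0 0 0 0 1 1 1 0 0 0 0 0 0 33",
    "Defender":          "x x 17 0 x 20 x x 0 x 8 0 0 x 2 x",
    "TrendMicro":        "x x x x x 2 158 x x x x x x x x x",
    "F-Secure":          "103 x x 0 x x x x x x 0 x x x x 0",
    "Webroot":           "x 0 x 0 0 0 0 1 0 0 0 0 0 0 0 3",
}

# Sum only the tests the AV actually took part in.
totals = {av: sum(int(v) for v in row.split() if v != "x")
          for av, row in table.items()}
print(totals)
# → {'Avira Pro (Prime)': 48, 'Defender': 47, 'TrendMicro': 160,
#    'F-Secure': 103, 'Webroot': 4}
```

The sums match the totals quoted in the post (48, 47, 160, 103, 4), and the code also makes the anomaly visible: TrendMicro's and F-Secure's totals come almost entirely from a single spike each (158 and 103).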
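For readers unfamiliar with MOTW (mentioned for Group 3): on Windows, the Mark of the Web is a small `Zone.Identifier` NTFS alternate data stream attached to downloaded files, with an INI-like body where `ZoneId=3` denotes the Internet zone; AVs like Avast can use it to decide whether a file deserves sandbox detonation. A minimal cross-platform sketch that parses such a stream (`parse_motw` is a hypothetical helper, not a real API; the stream format and zone IDs are standard Windows conventions):

```python
import configparser

# Typical content of the Zone.Identifier alternate data stream (MOTW)
# that Windows attaches to files downloaded from the Internet.
ZONE_IDENTIFIER = """\
[ZoneTransfer]
ZoneId=3
"""

# Standard URLZONE values used in the ZoneId field.
ZONE_NAMES = {0: "Local machine", 1: "Local intranet", 2: "Trusted sites",
              3: "Internet", 4: "Restricted sites"}

def parse_motw(stream_text: str) -> str:
    """Return the zone name encoded in a Zone.Identifier stream body."""
    cp = configparser.ConfigParser()
    cp.read_string(stream_text)
    zone_id = cp.getint("ZoneTransfer", "ZoneId")
    return ZONE_NAMES.get(zone_id, "Unknown")

print(parse_motw(ZONE_IDENTIFIER))  # → Internet
```

On an actual NTFS volume the stream can be read with `open(r"sample.exe:Zone.Identifier")`; this matters for the test discussion because samples copied to the test machine without MOTW may not trigger MOTW-gated layers such as sandbox detonation or Defender's BAFS.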