Why I think testing "labs" are useless
[QUOTE="MacDefender, post: 924495, member: 83059"] I like it when the tests reveal enough information about their samples that we can draw meaningful real world conclusions. For example, years ago this community was one of the first to start noting how BitDefender’s signatures lagged for zero day protection. That drove the intuition that if you choose a BD product it needs to be combined with something else that protects against emerging threats — whether that’s a second engine or a dynamic behavior blocker. Even recently, @harlan4096’s tests showed that F-Secure had repeated blind spots to both scriptors as well as broad whitelisting of legitimate processes that could be abused (Python, Node.JS, etc). When a test just says that product A detected 490 out of 500 threats and product B detected 480 out of 500, I think it’s crazy that people switch AV software based off those kinds of results. In a lot of the lab tests there simply is not enough spread between the products and the tests exaggerate slight differences, less than a few percent detection differences. Worse, other labs conflate non-protection scores in with the overall score (ease of use, false positive on a largely irrelevant battery of software, etc) [/QUOTE]