Randomness in AV lab testing
[QUOTE="Andy Ful, post: 905479, member: 32260"] The minimal differences in AVs protection were discussed many times on MT. It can be also deduced from AV Labs reports. Simply, all awarded AVs in one particular test in a particular award category should be considered to have the same in-the-wild protection (at the time of testing), despite slightly different scoring in the test. This different scoring is mainly due to statistic fluctuations. These tests can have a meaning for AV vendors because the missed samples can help them to find detection errors and improve the detection engines. The consumers can conclude something from these tests only when comparing the results of several tests made by several AV Labs. The simplest method is watching how often the AV was awarded, without watching how many % it has got in the test. Some conclusions can be also made when the AV has got consistently top results (or consistently worse results) in several tests. Much of the discussion on AV Lab test threads about the not good result of a particular AV or about its suddenly stellar result is useless. Most AVs have to get such results sooner or later due to pure statistics, without decreasing or increasing the real in-the-wild AV protection. [/QUOTE]