Consumer Summary Report 2021
<blockquote data-quote="Andy Ful" data-source="post: 972505" data-attributes="member: 32260"><p>Reliable AV testing is a very complex problem that requires a lot of resources and time. The current testing methodologies have been developed over more than 20 years, and they are still far from perfect. Each of the well-respected AV testing labs uses a slightly different methodology, so one has to be careful when comparing their results.</p><p></p><p><strong>The main difficulty for readers is understanding that the results of one particular test cannot reflect real-world protection, because the tested samples are only a small fraction of all samples in the wild. Furthermore, different AVs can miss different samples, and each may detect samples that the others miss.</strong></p><p></p><p>Usually, the chance that a particular AV misses 0 samples in one test is similar to the chance of it missing a few samples in another test, even when both tests draw their samples from the same set of all samples available in the wild. This can be calculated and follows from statistical considerations. One has to use statistics because there is a tremendous number of ways to choose a small pool of samples (usually a few hundred) from all samples available in the wild (usually a few million). Each choice can give a slightly different number of missed samples, and that variation has nothing to do with lower or higher AV protection.</p><p></p><p>So, if the reader sees (in the AV-Comparatives Real-World tests) that Kaspersky missed 0 samples in March and 3 samples in September, this does not prove in any way that Kaspersky's protection was lower in September. We can see this constantly in the tests for most AVs.</p><p>Similarly, if Kaspersky missed 3 samples in a particular test and another AV missed 0 samples in the same test, this does not prove in any way that Kaspersky's protection was lower compared to that AV.
This uncertainty comes from testing a random pool of samples, because that pool is only a small part of all available samples (all samples in the wild).</p></blockquote><p></p>
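The statistical point in the quoted post can be sketched with a short calculation. Assuming a hypothetical AV that misses 0.5% of all in-the-wild samples, and a randomly drawn test pool of 380 samples (both numbers are chosen for illustration; they are not from the post), the number of missed samples in one test follows a binomial distribution:

```python
from math import comb

def miss_distribution(n_samples, miss_rate, max_misses):
    """Probability of missing exactly k of n_samples randomly chosen
    test samples, for k = 0..max_misses, when each sample is missed
    independently with probability miss_rate (binomial model)."""
    return [comb(n_samples, k) * miss_rate**k * (1 - miss_rate)**(n_samples - k)
            for k in range(max_misses + 1)]

# Hypothetical AV: true miss rate 0.5%, test pool of 380 samples.
probs = miss_distribution(380, 0.005, 5)
for k, p in enumerate(probs):
    print(f"P(miss exactly {k} samples) = {p:.3f}")
```

Under these assumptions the same AV has roughly a 15% chance of missing 0 samples and a 17% chance of missing exactly 3, so a "0 misses in March, 3 misses in September" result is entirely consistent with unchanged protection, which is exactly the post's argument.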