Randomness in the AV Labs testing.
<blockquote data-quote="plat" data-source="post: 909598" data-attributes="member: 74969"><p>Assuming each malware sample has equal weight, then statistically, the larger the sample size AND the larger the subject pool, the less statistically significant each missed sample becomes. Here's where it gets "fun" to add the color red to the graphs! Drama, drama--look at the difference, when it may not even be statistically significant.</p><p></p><p>Your calculations are the utopia of AV lab testing, Andy Ful. They should all be so clean. Further, as someone else stated above me, one quarter's results should not be treated as gospel ad infinitum for a given brand, as there is so much inherent variability--in both the samples and the subjects. These lab tests aren't clean; that's virtually impossible.</p><p></p><p>Now there is also the addition of "deviations from defaults" as part of the testing regimen of <a href="https://www.wilderssecurity.com/threads/av-comparatives-business-test-factsheet-august-september-2020.433374/" target="_blank">AV-Comparatives</a> for Business, for one. That's where the serious money is, both in security solutions and in targets for threats like ransomware. Not all "deviations" are created equal, right?</p></blockquote><p></p>
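The point about sample size can be made concrete with a quick confidence-interval sketch. This is a minimal illustration only, not any lab's actual methodology: it uses the Wilson score interval (one common choice for binomial proportions) to show that a single missed sample out of 10,000 barely moves the plausible range for the true detection rate, while one miss out of 100 leaves a much wider interval:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# One missed sample in a small test set vs. a large one:
for n in (100, 10_000):
    lo, hi = wilson_interval(n - 1, n)
    print(f"n={n:>6}: detection rate {100 * (n - 1) / n:.2f}%, "
          f"95% CI [{100 * lo:.2f}%, {100 * hi:.2f}%]")
```

Both runs show a ~99%+ headline detection rate, but the interval at n=100 is wide enough that a one-sample difference between two products tells you very little, which is the "red on the graph" complaint above.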