[SE Labs] Endpoint Security: Home, Small Business, and Enterprise (Q1 2022)
[QUOTE="Andy Ful, post: 984249, member: 32260"]
We are talking about different things.

In my post, I wrote that a relatively poor result in one test does not necessarily indicate that the protection in the wild during the testing period was poor. But it also does not mean that the protection cannot be poor. Simply put, the results of a single test (missed samples) are usually a kind of illusion. By accident, the illusion can be close to the truth.

The example with faulty pixels can give some insight into this issue. When you see a bunch of pixels resembling a woman without a nose, you perceive her as not pretty. The woman in the wild can still have a nose and be pretty, or she can have a very small nose and look unattractive.

If you take the AVLab tests as an example, I am not sure whether 20,000 samples in one test would be sufficient to see whether one AV is "prettier" than another. The AVLab tests are far more comprehensive than any test made by one person, and even when we gather the results from the years 2019-2021 (over 17,000 samples), we still cannot be sure whether Defender is better than F-Secure (we know from other tests that it probably is not):
[URL unfurl="false"]https://malwaretips.com/threads/webroot-secureanywhere-ce-22-2-vs-1000-sample-exe-test.113168/post-983155[/URL]
[/QUOTE]