Windows_Security

Level 23
Verified
Trusted
Content Creator
Hi,

I met Andreas and Peter from AV-Comparatives a few years ago in Prague, where they were speakers at an Avast-held event (Avast had just taken over AVG). Their real-world test is based on infections delivered through the browser. So when people download malware packs and execute them (allowing UAC elevation), the results are different. The same applies to slightly changed samples which are executed from disk.

To the (valued and respected) forum members - who download malware packs and run them in a virtual machine to find out how AVs do in such a setting - I can also say (tongue in cheek): there is no (software) medicine against user stupidity. So when people are so naive as to run software from unknown sources, what would you expect? It is like igniting Chinese fireworks in a closed toilet and being surprised you got your @$$ burned.

Tests executed in a different setting than AV-Comparatives' real-world tests will always show different results. This does not say anything about the quality of the AV-C test method (which is ISO and EICAR certified). They collect samples over a period (usually a month) and run them simultaneously on PCs in separated environments, each with its own separate internet connection.

The only non-transparent part of their test setup is that they don't publish any information about the samples used (such as age and malware families). When I do the math (assuming a normal spread) on their average sample size (around 150-200 collected in a month), probably only 5 to 7 samples are less than a day old (see *EDIT*). This means that any AV scoring less than 99.5% probably misses all those fresh samples. :sick:
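Kees's back-of-the-envelope arithmetic can be sketched in a few lines. The monthly sample count and even collection rate are his stated assumptions; the total test-set size used for the miss-rate illustration is a hypothetical figure, since the post does not pin it down.

```python
# Rough sketch of the fresh-sample arithmetic described above.
# Assumptions (from the post): 150-200 samples collected over ~30 days,
# spread roughly evenly over the month.

def fresh_per_day(monthly_samples: int, days: int = 30) -> float:
    """Samples less than a day old, assuming an even collection rate."""
    return monthly_samples / days

def protection_if_all_fresh_missed(total: int, fresh: float) -> float:
    """Protection rate (%) if an AV misses every fresh sample in the set."""
    return 100.0 * (1 - fresh / total)

for monthly in (150, 200):
    f = fresh_per_day(monthly)
    print(f"{monthly}/month -> ~{f:.1f} fresh samples per day")
    # Hypothetical total of 1400 test cases (NOT stated in the post):
    print(f"  missing all of them out of 1400 cases -> "
          f"{protection_if_all_fresh_missed(1400, f):.1f}% protection")
```

This reproduces the "5 to 7 samples less than a day old" figure directly from the 150-200 monthly count; the protection percentage depends entirely on the assumed test-set size.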

With this in mind, a protection percentage of 99.5% suddenly does not look that good (see the second-thoughts *EDIT*). But that is only true for the first victim. The average chance of being the first victim is less than 0.0002% to 0.0004% (I once did the math on another security forum), so in a real-world situation this protection is not that bad. For comparison, the chances of being involved in a traffic incident are higher and the chances of surviving an aircraft incident are much lower. But I tend to look at the bright side of life (so run a default deny * :) )


------------
EDIT
:) When watching the soccer match "Les Bataves contre les Bleus" last night, I suddenly had an epiphany (the French goalkeeper was really great, which made the Panenka penalty a beauty to watch) after the 2-0 for Holland.

On second thought, the assumption of collecting samples first and then testing them seems unlikely, since most malware links are only active for a few days. It is more likely they test continuously, so the number of fresh samples should be higher and a 99.5% protection rate deserves more credit than I gave it.

Regards Kees

* Default Deny:
- On a Windows 10 laptop: Windows Defender with Andy Ful's Hard_Configurator + Documents Anti-Exploit
- On a Windows 7 desktop: AppLocker (allowing only Microsoft-, Mozilla- and Google-signed executables) + Kardo's Crystal Security (used as the canary in the coal mine for executions from user folders)
 
Last edited:

TairikuOkami

Level 25
Verified
Content Creator
I'm tired of these tests. They need to specify WD + SmartScreen and WD without SmartScreen.
Indeed, like in MRG tests, but then the tests would be too obvious. No wonder, MRG is being sued by AV companies all the time. :)

To put it simply, any AV + SmartScreen will perform better than WD alone, any time.
 


Lord Ami

Level 19
Verified
Trusted
Malware Hunter
Hi,

I met Andreas and Peter from AV-Comparatives a few years ago in Prague, where they were speakers at an Avast-held event (Avast had just taken over AVG). Their real-world test is based on infections delivered through the browser. So when people download malware packs and execute them (allowing UAC elevation), the results are different. The same applies to slightly changed samples which are executed from disk.

To the (valued and respected) forum members - who download malware packs and run them in a virtual machine to find out how AVs do in such a setting - I can also say (tongue in cheek): there is no (software) medicine against user stupidity. So when people are so naive as to run software from unknown sources, what would you expect? It is like igniting Chinese fireworks in a closed toilet and being surprised you got your @$$ burned.

Tests executed in a different setting than AV-Comparatives' real-world tests will always show different results. This does not say anything about the quality of the AV-C test method (which is ISO and EICAR certified). They collect samples over a period (usually a month) and run them (in batches) simultaneously on PCs in separated environments, each with its own separate internet connection.

The only non-transparent part of their test setup is that they don't publish any information about the samples used (such as age and malware families). When I do the math on their average sample size (around 150-200 collected in a month), probably only 5 to 7 samples are less than a day old. This means that any AV scoring less than 99.5% probably misses all those fresh samples. :sick:

Regards Kees
+1

MH tests are never represented as "real world" scenario tests. And they are far from it (the virtual machine itself imposes restrictions on many malware samples).
But when we put all this aside, it's still interesting to "test" and see how products perform, even though it does not paint an adequate picture of a product's strengths/weaknesses.
 

jackuars

Level 23
Verified
Hi,

I met Andreas and Peter from AV-Comparatives a few years ago in Prague, where they were speakers at an Avast-held event (Avast had just taken over AVG). Their real-world test is based on infections delivered through the browser. So when people download malware packs and execute them (allowing UAC elevation), the results are different. The same applies to slightly changed samples which are executed from disk.

To the (valued and respected) forum members - who download malware packs and run them in a virtual machine to find out how AVs do in such a setting - I can also say (tongue in cheek): there is no (software) medicine against user stupidity. So when people are so naive as to run software from unknown sources, what would you expect? It is like igniting Chinese fireworks in a closed toilet and being surprised you got your @$$ burned.

Tests executed in a different setting than AV-Comparatives' real-world tests will always show different results. This does not say anything about the quality of the AV-C test method (which is ISO and EICAR certified). They collect samples over a period (usually a month) and run them (in batches) simultaneously on PCs in separated environments, each with its own separate internet connection.

The only non-transparent part of their test setup is that they don't publish any information about the samples used (such as age and malware families). When I do the math on their average sample size (around 150-200 collected in a month), probably only 5 to 7 samples are less than a day old. This means that any AV scoring less than 99.5% probably misses all those fresh samples. :sick:

With this in mind, a protection percentage of 99.5% suddenly does not look that good. But that is only true for the first victim. The average chance of being the first victim is less than 0.0002% to 0.0004% (I once did the math on Wilders), so in a real-world situation this protection is not that bad. For comparison, the chances of being involved in a traffic incident are higher and the chances of surviving an aircraft incident are much lower. But I tend to look at the bright side of life (so run a default deny without AV :) )

Regards Kees
Thanks for this valuable comment. Aren't the testing labs also looking at malware spread through pen drives, which are insanely common? So we aren't talking about just browser-based threats.
 

Windows_Security

Level 23
Verified
Trusted
Content Creator
Thanks for this valuable comment. Aren't the testing labs also looking at malware spread through pen drives, which are insanely common? So we aren't talking about just browser-based threats.
Their file protection test covers that. This thread is about their real-world tests, which are based on "internet" threats (link), so "browser-based" effectively means "internet-based".
 

509322

This test is trying to represent the environment of a "typical" user - one who is not downloading cracks in torrented zip files, turning off his AV to install warez, or running packs of malware on his desktop just for fun.
If you do any of the above, this test is not reflective of your environment.
That's precisely what they are testing, more or less.
 

DeepWeb

Level 25
Verified
MalwareHub testing does not try to represent the environment of a "typical" user. It tests the raw power of the AV against relatively unknown threats.
That's the difference, in a nutshell.
Well, if an AV holds up against unknown threats, then it should be good for typical users as well. If everything scores 100%, maybe they should make the tests harder? Just an idea borrowed from my professor lmao
 

Andy Ful

Level 52
Verified
Trusted
Content Creator
...
Even @Andy Ful has stated that a user would be better off installing Kaspersky and learning to configure it.
From a security point of view, KIS (but not KFA) has some important modules like Application Control (AC). If the user can learn to set it up properly, then he/she will have more comprehensive protection compared to Windows Defender (WD), even with ASR, Network Protection and Controlled Folder Access.

I do not know if this advantage can balance the possible compatibility/stability problems of KIS installed on Windows 10 - that can be an advantage for WD.

Anyway, this will probably improve security only for some users, because AC in KIS cannot tell the user that a file is malicious. Most users who download cracks and pirated software will simply turn off AC and install the crack anyway. For them, it really does not matter which AV they use.
 

509322

From a security point of view, KIS (but not KFA) has some important modules like Application Control (AC). If the user can learn to set it up properly, then he/she will have more comprehensive protection compared to Windows Defender (WD), even with ASR, Network Protection and Controlled Folder Access.

I do not know if this advantage can balance the possible compatibility/stability problems of KIS installed on Windows 10 - that can be an advantage for WD.

Anyway, this will probably improve security only for some users, because AC in KIS cannot tell the user that a file is malicious. Most users who download cracks and pirated software will simply turn off AC and install the crack anyway. For them, it really does not matter which AV they use.
1. Unfortunately, the way Kaspersky designed Application Control, it takes investigation and practice to use it effectively. Right there, that required effort immediately eliminates 99% of the world's population.

2. Kaspersky does block legit stuff in various ways, but at the same time it has the best options to solve these issues. However, see 1 above.

3. Kaspersky, like every other publisher, cannot protect people from themselves. People create their own "Doh !" moments.
 

Andy Ful

Level 52
Verified
Trusted
Content Creator
Indeed, like in MRG tests, but then the tests would be too obvious. No wonder, MRG is being sued by AV companies all the time. :)

To put it simply, any AV + smartscreen will perform better than WD, any time.
This test is outdated, because in April 2018 Microsoft added scripts and macros to the "Block at First Sight" feature. This feature is enabled by default in WD. I noticed that from May 2018 the rate of user-dependent actions dropped significantly. The average score for WD in AV-Comparatives tests (over the last six months) is 99.8% (+0.1% user dependent), so only 0.1% can be related to SmartScreen (compared to an average of 1.6% for the July 2017 - April 2018 period).
This does not mean that WD can properly secure every user, but it shows that WD has improved its detection without SmartScreen.
 

509322

This test is outdated, because in April 2018 Microsoft added scripts and macros to the "Block at First Sight" feature. This feature is enabled by default in WD. I noticed that from May 2018 the rate of user-dependent actions dropped significantly. The average score for WD in AV-Comparatives tests (over the last six months) is 99.8% (+0.1% user dependent), so only 0.1% can be related to SmartScreen (compared to an average of 1.6% for the July 2017 - April 2018 period).
This does not mean that WD can properly secure every user, but it shows that WD has improved its detection without SmartScreen.
I know BAFS has been in Windows Security since 1607. However, just pointing this out...

The most recent official Microsoft documentation that I saw states that Block at First Sight must be properly configured by the user - and then only on Pro or higher - if certain prerequisites are met.

Their own official documentation states Block at First Sight is only available when using Windows Defender ATP. Windows Defender ATP is the primary prerequisite.
 
Last edited by a moderator:

509322

Nobody here should believe that Windows Defender on default settings is better than most other AVs only because of the results of one AV-Comparatives test. So let's look at the results from the last six months:
Bitdefender 100+100+100+100+100+100 = 100
Microsoft 100+100+99.5 (+0.5)+100+100+99.5 = 99.8 (+0.1 user dependent)
Kaspersky 99+99.5+100+100+99.6+100 = 99.7
Avast 99.5+100+99.5+98.9+99.6+100 = 99.6
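The six-month averages quoted above can be reproduced with a short script. The Microsoft monthly figures are read here as 100, 100, 99.5, 100, 100, 99.5 with a single 0.5% user-dependent case (an interpretation of the slightly garbled line above); the other vendors' figures are taken as listed.

```python
# Reproduce the six-month protection averages quoted above.
scores = {
    "Bitdefender": [100, 100, 100, 100, 100, 100],
    "Microsoft":   [100, 100, 99.5, 100, 100, 99.5],
    "Kaspersky":   [99, 99.5, 100, 100, 99.6, 100],
    "Avast":       [99.5, 100, 99.5, 98.9, 99.6, 100],
}
# User-dependent portion per month (interpretation: one 0.5% case for Microsoft).
user_dependent = {"Microsoft": [0.5, 0, 0, 0, 0, 0]}

for vendor, monthly in scores.items():
    avg = sum(monthly) / len(monthly)
    ud = user_dependent.get(vendor)
    extra = f" (+{sum(ud) / len(ud):.1f} user dependent)" if ud else ""
    print(f"{vendor}: {avg:.1f}{extra}")
```

With these inputs the script prints 100.0, 99.8 (+0.1 user dependent), 99.7 and 99.6, matching the figures in the post to one decimal place.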

Two questions.
Why didn't Kaspersky have a good result in May (99%)?
Probably coincidence, or maybe not all modules worked as they should after the major update in April (it would not be the first time).

Why has Microsoft had consistently good results for some months? It may be a coincidence, or maybe it follows from adding scripts and macros to the "Block at First Sight" feature (introduced in Windows 10 ver. 1803). If one looks at the Microsoft results from about a year ago (July 2017 - April 2018), Microsoft scored 98.3 (99.9 including user-dependent actions).

Anyway, those results have nothing to do with the real protection of many users, because in the real world users are infected mostly by ignoring AV detections, running cracks or pirated software, etc. Those infection vectors cannot be properly measured by any AV lab.
The differences in protection performance are probably not statistically meaningful. In other words, people should just use what they like and stick with it.

But that's not how people generally think and behave... they just want 100%. All that matters to them is "What is the best AV?", because they don't know that there is no statistically meaningful difference between 95% and 100% in actual day-to-day computing for the typical home user.
 
Last edited by a moderator:

Evjl's Rain

Level 44
Verified
Trusted
Content Creator
Malware Hunter
This test is outdated, because in April 2018 Microsoft added scripts and macros to the "Block at First Sight" feature. This feature is enabled by default in WD. I noticed that from May 2018 the rate of user-dependent actions dropped significantly. The average score for WD in AV-Comparatives tests (over the last six months) is 99.8% (+0.1% user dependent), so only 0.1% can be related to SmartScreen (compared to an average of 1.6% for the July 2017 - April 2018 period).
This does not mean that WD can properly secure every user, but it shows that WD has improved its detection without SmartScreen.
Are you sure these are correct?
According to my last test of WD on default settings in August 2018, it had no improvement. I also did a few tests off-screen; WD never reacted to anything that signatures failed to detect, just a few old payloads downloaded by scripts.
Block at First Sight never works without tweaking:
https://malwaretips.com/threads/13-08-2018-19.85938/#post-756888

I disagree about user-dependent. Sometimes, when testing, WD prompted me to manually remove the threats or to reboot for complete removal => I think they count this as user-dependent, not SmartScreen. They count SmartScreen as a block, and FPs as well.
 

509322

Are you sure these are correct?
According to my last test of WD on default settings in August 2018, it had no improvement. I also did a few tests off-screen; WD never reacted to anything that signatures failed to detect, just a few old payloads downloaded by scripts.
Block at First Sight never works without tweaking:
https://malwaretips.com/threads/13-08-2018-19.85938/#post-756888

I disagree about user-dependent. Sometimes, when testing, WD prompted me to manually remove the threats or to reboot for complete removal => I think they count this as user-dependent, not SmartScreen. They count SmartScreen as a block, and FPs as well.
I can confirm your test results. It gets to the point where one doesn't bother testing Windows Defender any longer because it is so easy to defeat.

Just use a USB flash drive. Instant system pwn. (I would think you are aware of this, since infections spread by USB devices are widespread in South, Central and East Asia.)
 

noob guy

Level 1
1. Unfortunately, the way Kaspersky designed Application Control, it takes investigation and practice to use it effectively. Right there, that required effort, immediately eliminates 99 % of the world population.

2. Kaspersky does block legit stuff in various ways, but at the same time it has the best options to solve these issues. However, see 1 above.

3. Kaspersky, like every other publisher, cannot protect people from themselves. People create their own "Doh !" moments.
I think this is why the layered approach is still the best, rather than depending entirely on just one AV or AV suite - particularly for us lesser mortals combining free software for financial reasons. A free AV that can be tweaked, plus a default deny (VS/OSA), is perhaps enough for most home users who don't have dangerous internet habits.
 

509322

I think this is why the layered approach is still the best, rather than depending entirely on just one AV or AV suite - particularly for us lesser mortals combining free software for financial reasons. A free AV that can be tweaked, plus a default deny (VS/OSA), is perhaps enough for most home users who don't have dangerous internet habits.
Learning is the best thing possible.

A layered approach is always the best approach.

When it comes to paid versus free, what you are really paying for is the support.

When it comes to security matters, there are always 100 sides to the coin. It all depends upon what side of the coin is being discussed.
 

Robbie

Level 30
Verified
Content Creator
Malware Tester
It's not bashing Windows Defender, it's pointing out facts, as Lockdown mentioned.

I strongly believe (feel free to disagree) that Microsoft should focus on stability and bug fixing in Defender before adding more modules such as the "sandbox". Once you've got a stable product with as few bugs as possible, start developing anti-executable techniques and modules, like the Block at First Sight function. If they can rely on Application Control modules and the like instead of signatures, I'm pretty sure Windows Defender would be much more efficient.

I just find it amusing that they can't make it stable, or at least more usable, being the devs of the OS it comes installed in by default. Anyway, after 1809 it doesn't surprise me.