True, it has had an incredibly high false positive rate for a long time. I can confirm that F-Secure is the king of false positives.
Quite honestly, as an experienced user, Windows Defender would be more than enough for me, but because of its performance hit I'd rather use Kaspersky, which doesn't seem to affect my system performance at all.
As a matter of fact, ESET has never shone with its default settings (the settings independent testers mostly use). Nor has ESET ever shone at detecting threats in the moment, even though they update their databases pretty quickly.
Microsoft (Windows Defender) unbelievable: 99.1% (with 0.9% user-dependent). I remember the (Windows 7) days when MSE was often shown as the baseline with a block rate as low as 60%.
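For what it's worth, those two figures are consistent with the 316-case test set quoted further down the thread. A minimal sketch of the arithmetic, assuming (purely for illustration, the report's actual per-case counts aren't quoted here) that 313 cases were blocked outright and 3 were left to the user:

```python
# Hypothetical illustration only: the per-case counts below are assumed,
# not taken from the AV-Comparatives report.
total_cases = 316       # size of the October 2017 Real-World test set
blocked = 313           # assumed: cases blocked outright
user_dependent = 3      # assumed: cases where the user was asked to decide

blocked_rate = blocked / total_cases * 100
user_dependent_rate = user_dependent / total_cases * 100

print(f"Blocked: {blocked_rate:.1f}%")               # ~99.1%
print(f"User-dependent: {user_dependent_rate:.1f}%")  # ~0.9%
```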
Avira and Kaspersky seem to have fallen off quite a bit from the previous tests.
The samples used are old by today's standards.
ESET......worse and worse
So you are telling me that Windows Defender has not improved, but the test has declined in relevance.
That would not explain why Microsoft used to come in last (substantially lower than other AVs) and now ends up in the middle of the pack.
If you throw a lot of new stuff at it, it is going to end up dead last.
AV-Comparatives: Real-World Protection Test - October 2017
The results are based on the test set of 316 live test cases (malicious URLs found in the field),
consisting of working exploits (i.e. drive-by downloads) and URLs pointing directly to malware. Thus
exactly the same infection vectors are used as a typical user would experience in everyday life. The
test-cases used cover a wide range of current malicious sites and provide insights into the protection
given by the various products (using all their protection features) while surfing the web.
The Malware Protection Test assesses a security program’s ability to protect a system against infection
by malicious files before, during or after execution. The methodology used for each product tested is
as follows. Prior to execution, all the test samples are subjected to on-access and on-demand scans by
the security program, with each of these being done both offline and online. Any samples that have
not been detected by any of these scans are then executed on the test system, with Internet/cloud
access available, to allow e.g. behavioural detection features to come into play. If a product does not
prevent or reverse all the changes made by a particular malware sample within a given time period,
that test case is considered to be a miss. If the user is asked to decide whether a malware sample
should be allowed to run, and in the case of the worst user decision system changes are observed, the
test case is rated as “user-dependent”.
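Read as a procedure, the quoted methodology boils down to a small decision tree per sample. Below is a minimal sketch of that classification logic; the `product` object and its scan/execute methods are hypothetical stand-ins for whatever test harness AV-Comparatives actually uses, not a real API:

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    BLOCKED = "blocked"                # detected or neutralised by the product
    USER_DEPENDENT = "user-dependent"  # user asked; worst choice leads to infection
    MISSED = "missed"                  # system changes remained after the time window

@dataclass
class ExecutionResult:
    asked_user: bool            # product prompted the user to decide
    worst_choice_infects: bool  # worst user decision leaves system changes
    changes_remain: bool        # changes not prevented/reversed in time

def rate_test_case(product, sample) -> Verdict:
    """Classify one sample along the lines of the quoted methodology.
    `product` is a hypothetical wrapper, not a real vendor or AV-Comparatives API."""
    # Step 1: on-access and on-demand scans, each run offline and online,
    # before the sample is ever executed.
    for online in (False, True):
        if product.on_access_scan(sample, online=online) or \
           product.on_demand_scan(sample, online=online):
            return Verdict.BLOCKED

    # Step 2: still-undetected samples are executed with Internet/cloud access,
    # so behavioural and cloud-based detection features can come into play.
    result: ExecutionResult = product.execute_and_observe(sample)

    # Step 3: if the user was asked and the worst possible decision would leave
    # the system modified, the case is rated "user-dependent".
    if result.asked_user and result.worst_choice_infects:
        return Verdict.USER_DEPENDENT

    # Step 4: changes not prevented or reversed within the allotted time window
    # count as a miss; everything else counts as protected.
    return Verdict.MISSED if result.changes_remain else Verdict.BLOCKED
```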