AV-Comparatives Real-World Protection Test, July-November

Rebsat

@harlan4096

Q1: Would you please give us your opinion on the following result of Kaspersky Internet Security?
Test cases: 998
Blocked: 993
Compromised: 5

Q2: How can Kaspersky Internet Security be compromised in a total of 5 test cases? How is this possible?

Thank you very much for your answers bro (y)
 

harlan4096

Not much to say: no AV gives 100%, and you can't trust those results... If I remember correctly, Panda Dome Free got 100% in some tests, even better than KIS (paid). Now go to our Malware Hub and check/compare the test results of KFA 2019 (free) vs. Panda Dome Free (the latter helped in combo with NVT tools like NVT OSArmor or NVT SysHardener, and with the packs tested about 14 or 16 hours after being published, to give it some time)... there's no comparison...

Also, Kaspersky's default settings may be weak against some threats...
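
For context, a quick check of the figures quoted above (assuming they are exactly as shown in the chart) puts the protection rate just under 99.5%, which is why a handful of misses can still sit near the top of the graph:

```python
# Quick check of the quoted AV-Comparatives figures for KIS (values taken from the post above).
blocked, compromised = 993, 5
total = blocked + compromised                      # 998 test cases
print(f"Blocked:     {blocked / total:.1%}")       # ~99.5%
print(f"Compromised: {compromised / total:.1%}")   # ~0.5%
```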
 

Nightwalker

I am suspicious that most antivirus vendors have simply learned how to play the "game" of "Real-World Protection".

I guess vendors like Panda use machine learning/automation plus the cloud to block most malware with their web filter module. With telemetry plus some dedicated "researchers", I think they can make their solution score much higher than it should in AV-Comparatives tests.

It is somewhat akin to cheating: they know more or less what kinds of sources/honeypots AV-Comparatives uses, so they aggressively block "everything" with the web filter module, and because of how fast the links "die", those vendors aren't harmed by false positives.

I think most of those threats wouldn't be blocked by Panda without its web filter...

TL;DR: Panda and many other vendors are much weaker in the "true real world" protection scenario; they are too busy "stealing" Kaspersky detections or flagging everything without analysis to actually bother improving their engines.


The Holy Grail of AV Testing, and Why It Will Never Be Found

Kaspersky defends false detection experiment


@Rebsat @harlan4096
 

Arequire

Right under the ridiculous bar graph, people:
We would like to point out that while some products may sometimes be able to reach 100% protection rates in a test, it does not mean that these products will always protect against all threats on the web. It just means that they were able to block 100% of the widespread malicious samples used in a test.
 

ForgottenSeer 72227

@harlan4096

Q1: Would you please give us your opinion on the following result of Kaspersky Internet Security?
Test cases: 998
Blocked: 993
Compromised: 5

Q2: How can Kaspersky Internet Security be compromised in a total of 5 test cases? How is this possible?

Thank you very much for your answers bro (y)

No product is 100% perfect, despite what the test results may say. Kaspersky is a very good product, but it too can be bypassed by malware. Every test you see should be taken with a grain of salt. The problem is, all people look for are the graphs, and as a result they get a false sense of "wow, X product gets 100%, that means I will never get infected no matter what I do". The truth is, you CAN get infected if you practice unsafe habits. That's why it's very important to use good/safe computing habits regardless of which programs/setup you choose to use.

Not much to say: no AV gives 100%, and you can't trust those results... If I remember correctly, Panda Dome Free got 100% in some tests, even better than KIS (paid). Now go to our Malware Hub and check/compare the test results of KFA 2019 (free) vs. Panda Dome Free (the latter helped in combo with NVT tools like NVT OSArmor or NVT SysHardener, and with the packs tested about 14 or 16 hours after being published, to give it some time)... there's no comparison...

Also, Kaspersky's default settings may be weak against some threats...

Spot on (y)

Right under the ridiculous bar graph, people:

This is the other problem I have with these tests. All people look at are the graphs, but they don't take the time to read the reports, or don't know how to interpret them properly. More often than not, if you take the time to read the report, there's a ton of useful information there. It helps to clarify the results and gives you a better understanding of them. Your quote from the results graph is a good example. Another is the FP chart that AV-Comparatives includes in the report. It breaks the FPs down by the prevalence of the actual file(s), but all people look at is the total number, and that's it. Are they from well-known programs, or are they coming from programs that no one has heard of or uses? That doesn't mean a particular product can't improve on its FP rate, but it does give you a sense of where it's coming from.

How can you trust labs that always give 90+% scores when they could easily make them drop to less than 50%...
Well, how can they? Their marketing strategy would be toast :p
 

Burrito

Scripts, real 0-hour/0-day threats, fileless malware, etc., etc...

Yes.

Creating a tougher test methodology and better demonstrating weaknesses (and strengths) would benefit us (consumers) greatly. SE Labs is willing to show the products that absolutely get whacked... but often we don't see a lot of those scores, as they are hidden. Even my preferred test lab often does not list the bottom dwellers if vendors are willing to pay to hide the result. It's an unfortunate reality in that business: you need revenue to survive... or you become like Dennis Technology Labs...

[Attached: two SE Labs consumer protection-ratings charts]
 

Andy Ful

Yes.

Creating a tougher test methodology and better demonstrating weaknesses (and strengths) would benefit us (consumers) greatly. SE Labs is willing to show the products that absolutely get whacked... but often we don't see a lot of those scores, as they are hidden. Even my preferred test lab often does not list the bottom dwellers if vendors are willing to pay to hide the result. It's an unfortunate reality in that business: you need revenue to survive... or you become like Dennis Technology Labs...

[Attached SE Labs charts]
That was an old report (https://selabs.uk/download/consumers/jan-mar-2018-consumer.pdf).
The last report looks different, but also shows the wide spectrum of results:
[Chart: SE Labs July-September 2018 consumer report results]

https://selabs.uk/download/consumers/epp/2018/jul-sep-2018-consumer.pdf
 

Evjl's Rain

I recently disinfected some PCs protected by WD and SmartScreen. A few were heavily infected.
Same observation in every case: they all used a download manager.
One special case: infected by a low-detection malware from a password-protected zip file (Emsisoft EK missed it, Zemana detected it).

Therefore, I'm not at all convinced by WD's results in the various labs. They are comparing whitelisting (BAFS and SmartScreen) to blacklisting solutions.
 

Andy Ful

I recently disinfected some PCs protected by WD and SmartScreen. A few were heavily infected.
Same observation in every case: they all used a download manager.
One special case: infected by a low-detection malware from a password-protected zip file (Emsisoft EK missed it, Zemana detected it).

Therefore, I'm not at all convinced by WD's results in the various labs. They are comparing whitelisting (BAFS and SmartScreen) to blacklisting solutions.
The role of SmartScreen in the tests is unclear to me, too. If the samples are taken from FAT32 USB drives, then SmartScreen will ignore them, and the results for WD will be lower (see the sketch below).
Maybe all the AVs could be tested with SmartScreen ON? SmartScreen is a Windows feature which works even with WD disabled. But the AV vendors probably won't like this idea, because of false positives.
Yet BAFS is not whitelisting; it is more like AI blacklisting in the cloud for very suspicious files. Nothing special; most advanced AVs use something like that.
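
A minimal sketch of why the file's origin matters here: SmartScreen only evaluates files carrying the Mark of the Web, a Zone.Identifier alternate data stream that NTFS attaches to downloads and that FAT32 cannot store. The path below is hypothetical, and this is only an illustration of the mechanism, not SmartScreen's own code.

```python
# Sketch: check whether a file carries the Zone.Identifier stream ("Mark of the Web").
# NTFS stores it as an alternate data stream; FAT32 cannot, so samples copied from a
# FAT32 USB stick arrive without it and SmartScreen never evaluates them. Windows-only.
def has_mark_of_the_web(path: str) -> bool:
    try:
        with open(path + ":Zone.Identifier", "r") as ads:
            return "ZoneId=3" in ads.read()   # ZoneId=3 = downloaded from the Internet zone
    except OSError:
        return False                          # no ADS: non-NTFS source, or the stream was stripped

if __name__ == "__main__":
    print(has_mark_of_the_web(r"C:\Users\Public\Downloads\sample.exe"))  # hypothetical path
```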
 

Evjl's Rain

Yet BAFS is not whitelisting; it is more like AI blacklisting in the cloud for very suspicious files. Nothing special; most advanced AVs use something like that.
Thank you for your reply, but I do think it's whitelisting or something similar, because of this high number of FPs.
Without BAFS, WD should produce results similar to the Hub or to its old results from MRG Effitas.
 

Andy Ful

Thank you for your reply, but I do think it's whitelisting or something similar, because of this high number of FPs.
Without BAFS, WD should produce results similar to the Hub or to its old results from MRG Effitas.
It is totally different from whitelisting, because it is blacklisting.:giggle:
"When Windows Defender Antivirus encounters a suspicious but undetected file, it queries our cloud protection backend. The cloud backend applies heuristics, machine learning, and automated analysis of the file to determine whether the files are malicious or clean."
Enable Block at First Sight to detect malware in seconds

So, the suspicious but undetected file is not checked against a kind of whitelist (like SmartScreen Application Reputation).
BAFS can produce false positives, because the blacklisting is done by AI. You can see this on VirusTotal; just look at how many false positives some AVs have from AI-based detection. (y)
If you submit the false positive to Microsoft, then a whitelisting signature is created for WD, and the file is considered detected but clean (excluded from the BAFS AI). Such whitelisting cannot produce false positives.
If a file (undetected by signatures) that BAFS flags as malicious is executed by someone, it is first checked against the cloud blacklist and immediately blocked (this can produce false positives).

Edit.
There is one thing that can be slightly similar to whitelisting. Execution of the suspicious but undetected file is suspended for some seconds by default (but not totally blocked). If the AI thinks for too long or makes the wrong decision, the malware will be executed anyway. Yet another user who tries to execute the same malware later will be safe in most cases.
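
To make the distinction concrete, here is a rough sketch of the flow described above: hash lookups first, then a time-boxed cloud query. The cloud_verdict() function and the hash sets are placeholders for illustration, not Microsoft's actual interfaces.

```python
# Rough sketch of a "block at first sight"-style decision flow, as described above.
# cloud_verdict() is a stand-in for the real cloud ML/heuristics back end, not an actual API.
import hashlib

KNOWN_MALICIOUS = set()   # classic signature blacklist (sha256 hashes)
KNOWN_CLEAN = set()       # hashes whitelisted after a false-positive report

def cloud_verdict(sha256: str) -> str:
    """Placeholder: submit the hash/file to the cloud and wait a bounded time for a verdict."""
    return "unknown"      # 'malicious', 'clean', or 'unknown' if the deadline passes

def decide(path: str) -> str:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest in KNOWN_MALICIOUS:
        return "block"    # ordinary blacklisting by signature
    if digest in KNOWN_CLEAN:
        return "allow"    # excluded from the BAFS AI after an FP submission
    # Unknown file: execution is held for a few seconds while the cloud decides.
    verdict = cloud_verdict(digest)
    # An AI mistake here is a false positive; a timeout means the malware runs anyway.
    return "block" if verdict == "malicious" else "allow"
```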
 

Andy Ful

It can seem strange that developers and AV vendors cannot come to an agreement. There are far more malware executables than legitimate executables, so it should be easier to whitelist the legitimate executables than to blacklist the malware. If every developer submitted his/her executables for whitelisting, then something like BAFS could simply work on the basis of a whitelist. :giggle:

For the customers, it would be much better if there were not many AV vendors, but instead many cooperating whitelisting vendors and a few AV vendors. But this would probably not be easy for the developers, because either they would be forced to pay for the whitelisting or the customers would be (just as they already pay for the AV or the Internet connection). :unsure::(
The AVs could then focus on protecting against whitelist-bypass attacks.
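
A toy sketch of that idea, with a fabricated vendor entry: developers submit the hashes of their released binaries, and anything not found in the shared whitelist is denied by default.

```python
# Toy default-deny whitelist: only executables whose hashes were submitted by their
# developers may run. The single entry below is fabricated for the example.
import hashlib

SUBMITTED_RELEASES = {
    # sha256 of a published build -> (vendor, product); hypothetical entry
    "0123abcd...": ("ExampleSoft", "ExampleApp 1.0"),
}

def allowed_to_run(path: str) -> bool:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in SUBMITTED_RELEASES   # unknown hash => blocked; no AI verdict needed
```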
 
