AV Comparatives July 2017

Transhumana

Level 6
Verified
Well-known
Jul 6, 2017
271
Same place Kaspersky and Microsoft :D:D

Nah, this is AV-Comparatives' way of trying to mediate between them: "MS, I see you burying those license expiration notifications! And stop all that trash-talking about Kaspersky! Kaspersky - drop that complaint right now, Mister! Come on boys, why do you have to fight, eh? You're both equally valuable members of PC security society, now shake hands like big boys, cease fire and call a truce." :D:D:D
 

Pat MacKnife

Level 15
Verified
Top Poster
Well-known
Jul 14, 2015
731
Last edited:

budda

Level 2
Verified
Feb 13, 2013
89
Let's just say Panda on Windows 10 is a tad better than Windows 95 without it. :D (Yes, I trust Panda, can you tell? :p)
I can only imagine they tested it against empty .txt files while they threw everything they had against Kaspersky.

Well, I was talking about the free AV, not the paid one, and it was better than Microsoft and some other AVs like McAfee.
 
  • Like
Reactions: ispx

Fritz

Level 11
Verified
Top Poster
Well-known
Sep 28, 2015
543
Well, I was talking about the free AV, not the paid one, and it was better than Microsoft and some other AVs like McAfee.

Well, better than M$ and McAfee is a given in my book. Also, please refer to the thoughtfully placed smilies, indicating the distinct possibility of a slight humoristic exaggeration on my part.
 
  • Like
Reactions: frogboy

Faybert

Level 24
Verified
Top Poster
Well-known
Jan 8, 2017
1,318
BullGuard continues to improve in protection; meanwhile F-Secure's big problem with false positives continues, and so does Emsisoft's.
 

Fritz

Level 11
Verified
Top Poster
Well-known
Sep 28, 2015
543
F-Secure's big problem with false positives continues, and so does Emsisoft's.
I don't claim to be the benchmark, but neither F-Secure nor Emsisoft bothers me with a crazy amount of FPs, and that's coming from a guy running a boatload of weird code concoctions and rarely seen software. No idea where they get all those FPs from. *shrugs*
 
  • Like
Reactions: Rebsat and frogboy

legendcampos

Level 6
Thread author
Verified
Aug 22, 2014
286
I also saw the 12 Emsisoft False Positives...;)
Second-worst ranking; just behind F-Secure, which had 27. :eek:

Most of them must be PUPs, because they are "reliable" programs bundled into other software, which I consider malicious.
 

Lord Ami

Level 21
Verified
Honorary Member
Top Poster
Malware Hunter
Well-known
Sep 14, 2014
1,026
From my experience of running F-Secure (running it right now):
FPs are minimal, but I can say there are more of them than with Avast, for example. Not so much false detections per se, but DeepGuard sometimes blocks "rareish" software from running (Reason: Rare application). I've been submitting them and they get fixed quite fast. Can't complain.

To me it seems that F-Secure is not popular enough among casual users to populate its cloud with lots of data. Recently the newest Snappy Driver Installer Origin was blocked from running after an update, which is to be expected; however, these are the only "FPs" I'm getting. No false detections like "Trojan.Gen.Cryptic" or such. Great software!

Oh, and FPs in the website department: for example, I had an FP when trying 360 Total Security (in a VM) - the download link was blocked by Freedome VPN (and FS Protection). So... these kinds of random FPs seem to happen. IIRC, txt.do was also blocked (until I submitted it).

NB: This is all coming from a person who is not exactly a "set and forget" user. I test and try software from time to time. All in all, I can't complain about F-Secure's false positives!
 
  • Like
Reactions: Fritz and DJ Panda

kamla5abi

Level 4
Verified
May 15, 2017
178
AV tests are a specific type of measuring stick. I think they are more useful to the publisher than to the end user.
In the case of AV-Comparatives tests, most people just see a bar graph and use it alone to pass judgment. Therein lies a big problem.
This is 100% the truth... random people who find these results via Google searches and just glance at them as a bar graph (especially bar graphs with scales that DON'T go from 0-100...) are likely to misinterpret the actual data...

Case in point: below, in my response to quotes of user @212eta.

As a side note:
I also like how @212eta attaches a pic to his response which is zoomed in to the 90-100% range... :p
so that a 1% difference (for example) LOOKS like a huge difference to a user who quickly glances at the bar graph and makes a hasty decision based on it...
In this case, 329 total test cases were "tested"... so a 1% difference is 3.29 test cases... but the bar graph posted makes it look like a huge difference between the left side and the right side... :(
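
A quick back-of-the-envelope sketch of that point (my own illustration; the 329 test cases and the 90-100% zoom are the only numbers taken from this thread):

# How a truncated axis exaggerates a small gap between two products.
total_cases = 329        # test cases in the July 2017 run, as quoted above
gap_percent = 1.0        # an example 1% difference between two bars

# How many actual test cases a 1% gap corresponds to
gap_cases = total_cases * gap_percent / 100    # ~3.29 cases

# How much of the plotted axis that same gap occupies
full_axis = 100 - 0      # a 0-100% axis
zoomed_axis = 100 - 90   # a 90-100% axis, like the attached picture

print(f"1% gap = {gap_cases:.2f} of {total_cases} test cases")
print(f"fills {gap_percent / full_axis:.0%} of a 0-100 axis")
print(f"fills {gap_percent / zoomed_axis:.0%} of a 90-100 axis")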

View attachment 163380
AV-Comparatives - Independent Tests of Anti-Virus Software - Real World Protection Test Overview
Emsisoft
ranked Last.
-User Dependent: 3.6%
-False Positives: 12 (second-worst; just behind F-Secure, which had 27.)
According to this user, Emsisoft ranked last...
HOWEVER....
how much RED do you see in the bar graph, people??
By AVC's own definition: red means compromised; yellow means user dependent (meaning the user gets a popup asking what to do....) but AVC makes it look like that's still a "fail" for the product by the way they represent the data.... that's called BIAS in the research world....
Emsisoft: Green = 96.4, Yellow = 3.6, Red = 0.... 96.4 + 3.6 = 100. So ZERO % compromised. But AVC's bar graph makes Emsisoft look like they ranked last... :rolleyes:

Remember that ANY popup asking the user what to do counts as yellow for AVC... which they misrepresent as "sort of compromised" and credit at only 50% towards the total overall protection percentage when they publish their cumulative biannual results.... does that make any sense?? o_O The AV software asks the user "do you want to allow this or not?" and AVC assumes the user is an idiot drone who says "sure" exactly 50% of the time and "nope" the other 50% of the time.....
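
To make that scoring concrete, here's a minimal sketch of the weighting as I've described it above (my reading of AVC's cumulative scoring, not their published formula; the 96.4/3.6/0 split is Emsisoft's July 2017 result quoted in this thread):

# Assumed scoring: user-dependent cases are credited at 50% (my assumption,
# based on the description above, not AVC's documented method).
def protection_score(blocked_pct, user_dependent_pct, compromised_pct,
                     user_dependent_weight=0.5):
    assert abs(blocked_pct + user_dependent_pct + compromised_pct - 100.0) < 1e-6
    return blocked_pct + user_dependent_weight * user_dependent_pct

# Emsisoft's July 2017 numbers as quoted in this thread: 96.4 / 3.6 / 0
print(protection_score(96.4, 3.6, 0.0))                             # 98.2 -> shown near the bottom of the chart
print(protection_score(96.4, 3.6, 0.0, user_dependent_weight=1.0))  # 100.0 -> zero compromised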

Someone wrote a post about making "Assumptions".... seems like AVC is ALSO making assumptions, no?
;)
I also saw the 12 Emsisoft False Positives...;)
Second-worst ranking; just behind F-Secure, which had 27. :eek:
Now, to address your false positive remark:
From AVC's Feb-June 2017 cumulative report:
"In this kind of testing, it is very important to use enough test cases. If an insufficient number of samples are used in comparative tests, differences in results may not indicate actual differences in protective capabilities among the tested products1 . Our tests use more test cases (samples) per product and month than any similar test performed by other testing labs. Because of the higher statistical significance this achieves, we consider all the products in each results cluster to be equally effective, assuming that they have a false-positives rate below the industry average."

What does this mean?
Statistically, what matters is the result clusters, together with each product's false positives relative to the overall industry average.
Some months are good for some products in terms of false positives, and other months are better for other products...
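
To put a rough number on that "statistical significance" point, here's a back-of-the-envelope sketch (my own approximation, not AVC's method): a simple normal-approximation 95% confidence interval for a ~99% protection rate at the two sample sizes mentioned in this thread (329 cases in one month, 1955 over Feb-June).

import math

# Rough 95% confidence interval half-width for a protection rate 'rate'
# measured over n test cases (normal approximation; my own estimate).
def ci_95_half_width(rate, n):
    se = math.sqrt(rate * (1 - rate) / n)   # standard error of a proportion
    return 1.96 * se

for n in (329, 1955):   # one month vs. the Feb-June cumulative report
    half = ci_95_half_width(0.99, n)
    print(f"n={n}: 99% +/- {half * 100:.1f} percentage points")
# With only 329 cases the interval is roughly +/-1.1 points, so a 1% gap
# between two products in a single month is well within the noise.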

My point:
Look at the chart of the Feb-June 2017 results...
I will conveniently also zoom into the 90-100 zone like you did, to show other users how this particular zoomed-in view misrepresents what the data says versus what people quickly glancing at a bar graph "see"....
AV-Comparatives - Independent Tests of Anti-Virus Software - Real World Protection Test Overview

Who has the highest number of false positives spread over a longer period of time and many more test cases? And this is just one multi-month report to make my point... I'm sure the results are different in other periods...
1) F-Secure 219 FP
2) McAfee 99 FP
3) Seqrite 59 FP

Emsisoft has 27 total FPs out of 1955 test cases...
When you look at THOSE numbers, does Emsisoft STILL look like it's in last place to you?? :rolleyes::p
I don't think so... right? ;)
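
Putting the two FP views side by side (a quick sketch; all numbers are the ones quoted in this thread, not pulled from the reports themselves):

# False positives as quoted in this thread: the single July 2017 test
# versus the Feb-June 2017 cumulative report.
july_2017_fps = {"Emsisoft": 12, "F-Secure": 27}
feb_june_2017_fps = {"F-Secure": 219, "McAfee": 99, "Seqrite": 59, "Emsisoft": 27}

for title, fps in (("July 2017", july_2017_fps),
                   ("Feb-June 2017 cumulative", feb_june_2017_fps)):
    print(title)
    for product, count in sorted(fps.items(), key=lambda kv: kv[1], reverse=True):
        print(f"  {product}: {count} FPs")
# Over the longer window, Emsisoft's total is the lowest of these four,
# even though it looked "second-worst" in the single July test.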
Panda must have shelled a lot of money to these testing sites.
Also, do you see any red on the Emsisoft bar...
WD and Kaspersky at the same place, hahahahha
lol, good to see someone else brought this point up too :)
A user-action popup prompt doesn't mean the system was compromised...
it just means Emsisoft gives more decision-making prompts to the user.
Maybe that's their approach? Maybe they prefer not to auto-block "unknown" things and instead let the user decide what to do, rather than just blocking automatically and having the user get annoyed at the product... different companies have different approaches and philosophies on how they want their product to act. That doesn't mean half of the users will allow those popups and half will block them, and it shouldn't be deemed a "negative" for that specific product...

Most of them must be PUPs, because they are "reliable" programs bundled into other software, which I consider malicious.
Yes, this could be the case too. I haven't looked up whether AVC counts PUPs as a "fail" or not... but if they do, then I would argue they are misrepresenting and skewing the results... PUP results should be reported separately from the malware "pass"/"fail" results... since PUP doesn't equal malware...
 

kamla5abi

Level 4
Verified
May 15, 2017
178
Okay, I look at these tests just for fun. I see Emsisoft is the last one, but I've been using it for years and, with the real proof of the facts, I can guarantee you that this result is totally wrong.
I agree.
From my personal usage here and there, trying it out over time, Emsisoft is NOT as weak a product as these tests sometimes try to show...
Yes, it leaves more decisions up to the user (the yellow region in ALL of AVC's graphs means the user was prompted for what to do), but maybe that's just the approach Emsisoft wants to take?? Maybe they are targeting a user base that is more towards the "techie" side of the spectrum, rather than "average" users who, I guess, never want to be disturbed by their AV...
 

509322

Despite what others might state, the numbers show all products are decent. Some have always been standouts, and the astute know which products those are. When you start to dissect the differences, it can get to the point where you are comparing fractions of a percent.
 

kamla5abi

Level 4
Verified
May 15, 2017
178
image.jpg
Funny you mention this "assumption" quote... because: (below)
I love to see how people trust any crappy YouTuber or home-made test with bad quality, variety and low volume of samples, yet can't trust a test properly made and documented.
The problem (and why users should take these "lab tests" with a grain of salt) is that the general public doesn't see any of these tests, nor do we know which samples were run or how old they are... samples run in June 2017 could be from Jan 2017... or there could be other problematic things...
There is a proper method by which scholarly or academic research is conducted, and though these labs "try" to emulate it to some degree, they are not quite there IMO...
We have to assume they are running legit tests on legit samples that are the same for every product... and the lab makes other assumptions too.

At least the YouTube testers show the actual test from start to finish, and show which samples are used and which options are set per product (usually), so we can decide whether the test and its results are worth anything or not....

There are YouTube "testing" videos where the tester launches 10 or more samples all at the same time (wow... really?? o_O) and doesn't take the time to verify and show that the system is completely clean after running each sample :rolleyes:... they just check whether their files are encrypted/deleted/whatever after like 20 seconds and don't bother checking if the malware sample dropped any files elsewhere on the system... which we obviously know ISN'T the way to show whether a sample was properly blocked or not :p

I am not saying ALL those YouTube tests are worthy of the "results" they claim either... ;)
 
  • Like
Reactions: AnonMan and Fritz
