I love to see how people trust any crappy YouTuber or home-made test with bad quality, low variety, and a low volume of samples, yet can't trust a test that was properly made and documented.
truly wise words have been spoken, +1 for that
Same place for Kaspersky and Microsoft
@Transhumana,
on behalf of MalwareTips, allow me to welcome the unofficial UN mediator to our community. A warm welcome to you, ma'am!
Nah, that's what AV-comparatives said, not me. I'm not that good with mediation. All I did was a simple interpretation.
I know, I know. On a more serious note, TM, what you wrote could just be true.
Let's just say Panda on Windows 10 is a tad better than Windows 95 without it. (Yes, I trust Panda, can you tell?)
I can only imagine they tested it against empty .txt files while they threw everything they had against Kaspersky.
I also saw the 12 Emsisoft False Positives... but do you see any red on the Emsisoft bar?
Well, I was talking about the free AV, not the paid one, and it was better than Microsoft and some other AVs like McAfee.
The big problem of F-Secure with false positives continues, and Emsisoft is going the same way.
I don't mean to be the standard, but neither F-Secure nor Emsisoft bothers me with a crazy amount of FPs. That's for a guy running a boatload of weird code concoctions and rarely seen software. No idea where they get all those FPs from. *shrugs*
I also saw the 12 Emsisoft False Positives...
Second-worst ranking; just behind F-Secure, which had 27.
You're running out of bullets, bro.
AV tests are a specific type of measuring stick. I think they're more useful to the publisher than to the end user.
In the case of AV-Comparatives' tests, most people just see a bar graph and use it alone to pass judgment. Therein lies a big problem.
According to this user, Emsisoft ranked last...
AV-Comparatives - Independent Tests of Anti-Virus Software - Real World Protection Test Overview
Emsisoft ranked Last.
-User Dependent: 3.6%
-False Positives: 12 (second-worst, just behind F-Secure, which had 27)
Panda must have shelled out a lot of money to these testing sites.
lol, good to see someone else brought this point up too. Also: do you see any red on the Emsisoft bar...?
WD and Kaspersky at the same place, hahahahha
Most of them must be PUPs, because they are programs that are "reliable" but bundled into other software, which I consider malicious.
Okay, I look at these tests just for fun. I see Emsisoft is the last one, but having used it for years, with the real facts as proof, I can guarantee you that this result is totally wrong.
I agree.
This is 100% the truth... random people who find these results via Google searches and just glance at them as a bar graph (especially bar graphs with scales that DON'T go from 0-100...) are likely to misinterpret the actual data...
Case in point: below, my responses to quotes from user @212eta.
as a side:
I also like how @212eta attaches a pic to his response that is zoomed in to the 90-100% range...
so that a 1% difference (for example) LOOKS like a huge difference to a user who quickly glances at the bar graph and makes a hasty decision based on it...
in this case, 329 total test cases were "tested"... so a 1% difference is 3.29 test cases... but the bar graph posted makes it look like a huge gap between the left side and the right side...
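To make that side note concrete, here's a tiny illustrative Python sketch (the 329-case total is the one mentioned above; everything else is made up for illustration) of what a 1% gap amounts to in raw test cases, and how much bigger it looks on a 90-100 axis:

```python
# Converting percentage-point gaps into raw test cases, and measuring
# how much a truncated y-axis inflates them. The 329-case total comes
# from the post above; the rest is purely illustrative.

TOTAL_CASES = 329

def cases_from_gap(gap_pct: float, total: int = TOTAL_CASES) -> float:
    """Convert a percentage-point gap into an absolute number of test cases."""
    return gap_pct / 100 * total

def visual_exaggeration(axis_min: float, axis_max: float = 100.0) -> float:
    """How many times wider a gap looks on a truncated axis vs. a full 0-100 axis."""
    return 100.0 / (axis_max - axis_min)

gap = 1.0  # a 1-percentage-point difference between two products
print(f"{gap}% of {TOTAL_CASES} cases = {cases_from_gap(gap):.2f} test cases")
print(f"on a 90-100 axis that gap looks {visual_exaggeration(90.0):.0f}x bigger")
# 1.0% of 329 cases = 3.29 test cases
# on a 90-100 axis that gap looks 10x bigger
```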
According to this user, Emsisoft ranked last...
HOWEVER....
how much RED do you see in the bar graph, people??
By AVC's own definition: Red means compromised; Yellow means user dependent (meaning the user gets a popup asking what to do....) but AVC makes it look like that's still a "fail" for the product by the way they represent the data.... that's called BIAS in the research world....
Emsisoft: Green = 96.4, Yellow = 3.6, Red = 0.... 96.4 + 3.6 = 100. So ZERO % compromised. But AVC's bar graph makes Emsisoft look like they ranked last...
remember that ANY popup asking the user what to do means yellow for AVC... which they misrepresent as "sort of compromised" and count 50% of that result toward the total overall protection percentage when they do their biannual, cumulative results publication.... does that make any sense?? The AV software asks the user "do you want to allow this or not?" and AVC assumes the user is an idiot drone who says "sure" exactly 50% of the time and "nope" the other 50% of the time.....
Someone wrote a post about making "Assumptions".... seems like AVC is ALSO making assumptions, no?
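To spell out what that 50% assumption does to a score, here's a minimal sketch, assuming the half-credit rule works exactly as described above (the Emsisoft percentages are the ones quoted in this thread):

```python
# Half-credit scoring as described above: user-dependent cases are
# assumed to count 50% toward "protected". Emsisoft's percentages are
# the ones quoted in this thread.

def protection_rate(blocked_pct: float, user_dependent_pct: float) -> float:
    """Protection % when user-dependent results are given half credit."""
    return blocked_pct + 0.5 * user_dependent_pct

# Emsisoft per the thread: 96.4% blocked, 3.6% user dependent, 0% compromised
score = protection_rate(96.4, 3.6)
print(f"published-style score: {score:.1f}%  (actual compromised: 0%)")
# published-style score: 98.2%  (actual compromised: 0%)
```

Under that rule, a product with zero actual compromises can still slide down the chart purely on how often it asked the user a question.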
now to address your false positive remark
From AVC's Feb to June 2017 cumulative report:
"In this kind of testing, it is very important to use enough test cases. If an insufficient number of samples are used in comparative tests, differences in results may not indicate actual differences in protective capabilities among the tested products1 . Our tests use more test cases (samples) per product and month than any similar test performed by other testing labs. Because of the higher statistical significance this achieves, we consider all the products in each results cluster to be equally effective, assuming that they have a false-positives rate below the industry average."
what does this mean?
statistically, it is more important to look at the different result clusters, together with each product's false positives relative to the overall industry average
Some months are good for some products in terms of false positives, and other months are better for other products...
My point:
look at the chart of the Feb-June 2017 results...
I will conveniently also zoom into the 90-100 zone like you did, to show other users how this particular zoomed-in view misrepresents what the data says versus what people quickly glancing at a bar graph "see"....
AV-Comparatives - Independent Tests of Anti-Virus Software - Real World Protection Test Overview
who has the highest number of false positives spread out over a longer period of time and many more test cases? And this is just one multi-month stretch of reports to show my point... I'm sure the results are different in other periods of time...
1) F-Secure 219 FP
2) McAfee 99 FP
3) Seqrite 59 FP
Emsisoft has 27 total FP out of 1955 test cases...
when you look at THOSE numbers, does Emsisoft STILL look like it's in last place to you??
I don't think so... right?
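For scale, here's a quick sketch of the FP rates implied by the cumulative numbers above (one stated assumption: the 1955-case total given for Emsisoft is used as the denominator for every product, since the post doesn't state the others):

```python
# FP rates from the cumulative Feb-June 2017 numbers quoted above.
# Assumption for illustration: the 1955-case total stated for Emsisoft
# is used as the denominator for all products.

FP_COUNTS = {"F-Secure": 219, "McAfee": 99, "Seqrite": 59, "Emsisoft": 27}
TOTAL_CASES = 1955

for product, fps in sorted(FP_COUNTS.items(), key=lambda kv: -kv[1]):
    print(f"{product:10s} {fps:4d} FPs  ->  {fps / TOTAL_CASES:6.2%}")
# F-Secure    219 FPs  ->  11.20%
# McAfee       99 FPs  ->   5.06%
# Seqrite      59 FPs  ->   3.02%
# Emsisoft     27 FPs  ->   1.38%
```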
lol good to see someone else brought this point up too
a user action popup prompt doesn't mean it was compromised...
it just means Emsisoft gives more decision-making prompts to the user
maybe that's their approach? maybe they prefer not to block "unknown" things or whatever and let the user decide what to do... instead of just auto-blocking and having the user get annoyed at the product... different companies have different approaches and philosophies on how they want their product to act. That doesn't mean those popups asking for user input mean half of users will allow and half will block... and therefore that it should be deemed a "negative" for that specific product...
yes, this could be the case too. I haven't looked up whether AVC counts PUPs as a "fail" or not... but if they do, then I would argue they are misrepresenting and skewing results... PUP results should be reported separately from the "pass" or "fail" results for malware... since a PUP doesn't equal malware...
funny you mention this "assumption" quote...because: (below)
the problem (and why users should take these "lab tests" with a grain of salt) is that the general public doesn't see any of these tests, nor do we know which samples were run or how old they are... samples run in June 2017 could be from Jan 2017... or some other problematic things...