[AV-TEST] Windows 10, Home User, October 2017

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,593
Do you happen to know why they left AV-TEST? I didn't know about it but the reason could be handy.
They probably disliked the AV-TEST testing procedure, which favors AVs that work without user-dependent actions; that seems to be the proper approach for average users. The same is true for Microsoft, because of SmartScreen's Application Reputation.
 

Deleted member 65228

Money. My personal opinion is that it comes down to money.
Hmmm, I don't know. AFAIK all the vendors pay the same amount of money for the public tests. Some companies like VirusBulletin will test privately for free and provide advice, but the cost of public results should be the same for all products involved in the same test.

About McAfee, I have 0 clue. I'd rather use a rogue AV hahahaha
 

amico81

Level 21
Verified
Top Poster
Well-known
Jan 10, 2017
1,061
Good results from independent AV testing labs are the best marketing, and every company needs good marketing. The test procedures are arguable, though. I'd like more transparency... which sample was blocked, etc. They could learn from our malware hub :cool:
 

Tsiehshi

Level 2
Verified
Nov 11, 2017
51
I find AVT vastly overrated. Almost any decent product gets full scores on non-zero-day file detection tests, and the zero-day ones aren't too challenging for them either. I also remember G Data and Symantec once hypocritically pulling out of AVC because they supposedly disagreed with the methodology, while continuing to be tested by AVT and its lower-quality tests. :ROFLMAO:
 

Deleted member 65228

Yeah, I agree @amico81 ^^ About the YouTube reviews, they won't get as much attention. Average users won't be watching them; god bless them if they have to suffer through some of those testing channels where a school kid runs through a URL and then blatantly declares whether a product is good or bad. If a vendor dislikes companies like AV-C/AV-TEST, then they certainly won't like the methodology of those YouTube reviews.

The malware hub is good, but remember that some samples are not always malicious; that's why some threads say they might be clean. Configurations can also be slightly changed, which can affect protection. Not to mention that when a sample is submitted to VirusTotal before a thread is made (or Hybrid-Analysis auto-comments a score implying it's malicious), a vendor could already have processed it and added a detection when, in actual fact, it wouldn't have flagged the sample otherwise. And many other conditions... one I heard about: apparently some vendor would just auto-flag new VT submissions as suspicious until they were manually checked, if another vendor had already flagged them (may not have been true, but I think I heard it from an actual vendor employee LOL)

If we think about malicious URLs, they can go down quite quickly. By the time a URL gets shared, a paranoid author may already have taken it down after X amount of views, and then some vendors might not flag it, because they would have flagged it based on heuristic analysis, which can't be done now that the website is down = not blocked, but it might have been blocked if the site were still up

Personally I like VirusBulletin and the malware hub testing, but I still take them both with a grain of salt, because things aren't always as they seem and every vendor has good and bad days. I've seen vendors be awful for years, then put their foot down, work hard and rocket past other solutions in the majority of tests, and vice versa
 

Deleted member 65228

not blocked, but it might have been blocked if the site were still up
e.g. a phishing website. Heuristic analysis could be as simple as scanning how large the HTML code-base of the webpage is and checking for copyright notices stolen from genuine websites like PayPal, Google, Microsoft, Dropbox, etc. -> compare URL and certificate = copyright is stolen = add to scoring system

maybe check for stolen images as well, after the above check = add to scoring system

code-base is small and it infringes by carrying copyright notices on an unofficial page = add to scoring system -> PayPal have so much code in their HTML alone, let alone their JS scripts, that a page stealing their copyright or titles/descriptions with only a few hundred lines is pretty much guaranteed to be a rogue copy

are there elements for entering details? = add to scoring system (e.g. text fields)

= scoring system may determine a flag or not.

Or it could be (or include) JS code analysis... -> logging keystrokes with JS APIs? General source code analysis. It could even attempt to identify obfuscation as a potential indicator and then try to "beautify" it (rough sketch below)
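A toy sketch of that scoring idea (the brand list, weights, line-count cutoff, obfuscation patterns and threshold are all illustrative assumptions, not any vendor's actual heuristics):

```python
# Toy phishing-page scoring sketch. All checks and weights are
# illustrative guesses, not any real product's heuristics.
import re

BRANDS = ("PayPal", "Google", "Microsoft", "Dropbox")

def phishing_score(url: str, html: str) -> int:
    score = 0
    lowered_html, lowered_url = html.lower(), url.lower()
    # Brand names/copyright text on a page whose URL isn't the brand's own.
    for brand in BRANDS:
        if brand.lower() in lowered_html and brand.lower() not in lowered_url:
            score += 2
    # Tiny code base carrying a big brand's text: the real PayPal page is
    # huge, so a few-hundred-line copy is almost certainly rogue.
    if len(html.splitlines()) < 300:
        score += 1
    # Elements for entering details (text/password/email fields).
    if re.search(r'<input[^>]+type=["\'](text|password|email)["\']', html, re.I):
        score += 2
    # Crude obfuscation indicators in inline JS.
    if re.search(r"eval\(|unescape\(|fromCharCode", html):
        score += 1
    return score

# A page scoring, say, 4 or more would be flagged (threshold is arbitrary).
```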

But if the page is already down, or the author has changed it to conceal what was previously there after X amount of views/victims, then it isn't blocked now, but it's still deemed malicious and the product is reported as not blocking it, even though the harmful content is no longer available.

Not always the case, but I've watched this happen a few times, and it can make a vendor's detections look bad even when it isn't their fault
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,593
There is a problem with properly testing Defender's most valuable feature, Block at First Seen (or Block at First Sight), on Windows 10. When the tester runs 0-day malware samples, some of them will infect the computer protected by Defender, more frequently than with other good AVs. That is true for that one computer, and most malware tests will show it. But what is not evident is that after that one computer is infected, in many cases the rest of the computers in the world running Defender (with an Internet connection) will be protected within 5 minutes! So the probability that a concrete user will be infected by 0-day malware is lower than the AV test results suggest. Generally, the simple statistics adopted in most 0-day AV protection tests are incorrect (because of cloud protection) and have nothing to do with the real-world scenario.
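A back-of-the-envelope sketch of that statistical argument, with entirely hypothetical numbers: only the machines that encounter a sample before the cloud verdict propagates are actually exposed, so the per-user risk is far smaller than a single-machine test implies.

```python
# Hypothetical numbers only: with a cloud "block at first seen" feature,
# only the machines hit before the cloud verdict propagates are exposed.

def infected_fraction(machines_hit_per_minute: float,
                      propagation_minutes: float,
                      total_machines: int) -> float:
    """Fraction of the install base a single 0-day sample can reach
    before the cloud pushes a blocking verdict to everyone else."""
    exposed = machines_hit_per_minute * propagation_minutes
    return min(exposed, total_machines) / total_machines

# Say 100 machines/minute encounter the sample, the verdict lands after
# ~5 minutes, and there are 500 million installs:
print(infected_fraction(100, 5.0, 500_000_000))
# -> 1e-06, far below what a single infected test machine would suggest.
```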
Thanks to the Block at First Seen feature, in the real world Defender's protection for home users is as good as that of any good AV. (y) It is true that other AVs also use clouds, but Microsoft's AI integrated with its cloud is the best for now.
Still, that does not mean using Kaspersky, Bitdefender, etc. is meaningless. They have many other pros. Also, on some computers, Defender can be very irritating.

Edit
The Block at First Seen feature is not so useful against targeted attacks, so in enterprises the good AVs could have an advantage over Defender. That is why Microsoft developed a special paid solution named Advanced Threat Protection. I do not know if it is as efficient as the other best AV solutions.
 

Arequire

Level 29
Verified
Top Poster
Content Creator
Feb 10, 2017
1,822
Can you blame them? Emsisoft is not stupid; they opted out of that testing lab :p lol
I think @Jack should ban results from other labs and only allow results from MT community testers :)
I don't much care for a ban on the lab tests, but I think the labs themselves should make it clearer that the results only apply to the samples used in the test and are not indicative of how they'd perform against malware you could potentially run into on the web. The same should apply to publications that report on said results, and to those who do their own testing, like YouTubers, PCMag, etc.
 

Deleted member 65228

the results only apply to the samples used in the test and are not indicative of how they'd perform against malware you could potentially run into on the web.
The results do typically contain notices that the performance (such as detection) is exclusive to the test. If you check AV-C's reports you'll see notices stating that products scoring 100% do not actually detect 100% of malicious software, only 100% of the samples used in the test.

As well as this, the samples used in the tests will be malware you could potentially run into on the web. The labs don't make a few hundred to several thousand malware samples themselves; they use real-world samples. Whether those samples are targeted at home users, businesses or both is irrelevant... Malicious software is malicious software, and the products are tested with it to see how well they perform during the tests.

The same should apply to publications that report on said results, and to those who do their own testing, like YouTubers, PCMag, etc.
YouTube tests are more opinionated than factual, because a lot of the time the samples are not actually verified to be malicious. A few detections on VirusTotal or a high score on Hybrid-Analysis doesn't verify it; I've seen clean software score very high marks on VirusTotal due to generic detections causing false positives, or on Hybrid-Analysis (H-A and other sandboxing reports need to be checked by researchers with the experience to tell what is malicious and what is benign).
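Pulling the raw verdict counts is the easy part; interpreting them is what needs an experienced researcher. A minimal sketch against the public VirusTotal v3 API (the API key is a placeholder):

```python
# Minimal sketch: fetch raw engine verdict counts for a file hash from the
# public VirusTotal v3 API. VT_API_KEY is a placeholder. The point: a
# couple of "malicious" verdicts out of ~70 engines proves very little
# without someone actually confirming the behaviour.
import requests

VT_API_KEY = "YOUR_API_KEY"  # placeholder

def detection_stats(sha256: str) -> dict:
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": VT_API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    # e.g. {"malicious": 3, "suspicious": 0, "undetected": 65, ...}
    return resp.json()["data"]["attributes"]["last_analysis_stats"]
```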

The tests conducted by testing companies such as VirusBulletin, AV-Comparatives, AV-TEST and many others are based on facts: a vendor either flagged X amount of samples or it didn't, for on-demand scanning and in real time, as well as in URL-flagging checks. It's well understood that test results aren't representations of how a product will perform all the time, but vendors that usually come top in such tests, like Bitdefender and Avira, typically do have amazing signatures; this is why many other vendors, such as Emsisoft, F-Secure, Qihoo and IObit, rely on the SDKs provided by those vendors for dual/triple (or more) engine combinations.

Vendors have good and bad days, testers at these companies do testing as a career and work there because they have experience, and review articles are based on facts established from the tests (usually with opinions presented alongside them). Banning reports from testing labs would be really silly and useless censorship. If people don't agree with or trust a testing lab, they can simply not read its reports; there's no need to ban them...

There are even rumours that a vendor can pay more to get better results. Not true, unless the testing lab is unethical and doesn't know what it's doing. I'm pretty sure each vendor in the same test pays the same amount of money to have its product's results published.

At the end of the day, attacks are doubling/tripling each year and thousands of new samples arise each week, so a product detecting everything or nothing isn't a representation of how it will always perform. Sure, an Average Joe who doesn't know anything might not see the disclaimers about the results, but it's down to them to read reports properly before assuming. None of the popular professional labs state "a product always has 100% detection" and what-not. Vendors sometimes pull marketing tricks like "99% detection" when they have a proper award from a test where they scored full marks, but providers of non-technology products and services use their awards and feedback for all sorts of marketing tricks too, so this stuff is nothing new.

To each their own, but that is just what I think. I do see where you're coming from, though
 

Deleted member 65228

What is your opinion of F-Secure?
They are quite good, I think, but they sometimes have a tad too many false-positive detections... They have a dual-engine scanner, because they combine Bitdefender signatures with their own (Bitdefender come top a lot of the time in tests because they focus heavily on good signatures; many vendors use their SDK for this reason, to boost themselves toward the top quicker). They also focused a lot on dynamic protection a few years ago, and their hard work is finally paying off in amazing ways (their DeepGuard component is really good).

They also dipped into anti-rootkit work, trying new ways to detect hidden processes and what-not. Some of the things F-Secure have done in this area were extremely creative... They once took advantage of data logged within csrss.exe to catch hidden processes, which was a really brilliant idea (see the sketch below)
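The csrss.exe trick itself is more involved, but the underlying "cross-view" idea can be sketched simply: enumerate processes through two independent APIs and diff the results; a rootkit that hooks only one enumeration path shows up in the gap. A minimal Windows-only sketch of that general idea (not F-Secure's actual method):

```python
# Cross-view process enumeration sketch (Windows only). NOT the csrss.exe
# technique itself, just the general idea: list PIDs through two
# independent APIs and report anything visible in one view but not the other.
import ctypes
from ctypes import wintypes

psapi = ctypes.WinDLL("psapi", use_last_error=True)
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateToolhelp32Snapshot.restype = wintypes.HANDLE
kernel32.Process32First.argtypes = [wintypes.HANDLE, ctypes.c_void_p]
kernel32.Process32Next.argtypes = [wintypes.HANDLE, ctypes.c_void_p]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

def pids_via_enumprocesses() -> set:
    arr = (wintypes.DWORD * 4096)()
    needed = wintypes.DWORD()
    if not psapi.EnumProcesses(arr, ctypes.sizeof(arr), ctypes.byref(needed)):
        raise ctypes.WinError(ctypes.get_last_error())
    return {arr[i] for i in range(needed.value // ctypes.sizeof(wintypes.DWORD))}

TH32CS_SNAPPROCESS = 0x00000002

class PROCESSENTRY32(ctypes.Structure):
    _fields_ = [("dwSize", wintypes.DWORD),
                ("cntUsage", wintypes.DWORD),
                ("th32ProcessID", wintypes.DWORD),
                ("th32DefaultHeapID", ctypes.c_void_p),
                ("th32ModuleID", wintypes.DWORD),
                ("cntThreads", wintypes.DWORD),
                ("th32ParentProcessID", wintypes.DWORD),
                ("pcPriClassBase", ctypes.c_long),
                ("dwFlags", wintypes.DWORD),
                ("szExeFile", ctypes.c_char * 260)]

def pids_via_toolhelp() -> set:
    snap = kernel32.CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0)
    entry = PROCESSENTRY32()
    entry.dwSize = ctypes.sizeof(PROCESSENTRY32)
    pids = set()
    if kernel32.Process32First(snap, ctypes.byref(entry)):
        while True:
            pids.add(entry.th32ProcessID)
            if not kernel32.Process32Next(snap, ctypes.byref(entry)):
                break
    kernel32.CloseHandle(snap)
    return pids

# PIDs in one view but not the other are worth a closer look (in practice
# you'd retry, since processes start and exit between the two snapshots).
print(pids_via_enumprocesses() ^ pids_via_toolhelp())
```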

They aren't perfect, because no vendor is, but they try hard and deliver good work. I don't use them (never have properly) and can't see myself doing so, because my needs are different, but many people use them because they provide quality work

They remind me of Norton, which is a good thing, because Norton are spectacular (I don't use Norton either, but I believe they are good, and they likely have one of the biggest malware intelligence DBs in the world)
 

Deleted member 65228

I mean, in my opinion zero-day protection is usually best with dynamic detection, because 0-day malware may get past signatures if it uses techniques to bypass static scanning methods (e.g. packing/obfuscation) and, if it's more complex, to evade memory scanning (e.g. waiting until emulation checks are over, page-guard exceptions, etc.). Vendors like Avast, AVG, Emsisoft, F-Secure and Kaspersky focus a lot on dynamic detection. Others like ESET have a HIPS, which means more user intervention, so it's less convenient for an average Joe... Comodo have a brilliant sandbox
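To make the packing point concrete: a classic static heuristic is to measure a file's byte entropy, since packed or encrypted payloads look close to random. A minimal sketch, where the 7.2 bits/byte threshold is an illustrative guess, not any real product's cutoff:

```python
# Minimal static heuristic sketch: Shannon entropy of a file's bytes.
# Packed/encrypted payloads look close to random (entropy near 8 bits/byte),
# which is one reason packing both evades plain byte signatures and is
# itself a common heuristic indicator. The 7.2 threshold is illustrative.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

def looks_packed(path: str, threshold: float = 7.2) -> bool:
    with open(path, "rb") as f:
        return shannon_entropy(f.read()) > threshold

# Plain text and typical compiled code sit well below 7 bits/byte;
# compressed or encrypted blobs sit near 8.
```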

ESET, for example, may not have a full-blown BB, but do you know what they DO have that other vendors don't, AFAIK? They have a UEFI bootkit scanner. How much more low-level and complex do you want them to get?

Just take all the lab test results with a grain of salt, don't assume a product will always perform like that, and you'll be fine. Just pick what is comfortable for you and meets your own needs... problem solved

In my opinion, signatures are somewhat obsolete against new, complex attacks, but they are still a useful addition. Static + dynamic = good, in my view
 

Paul.R

Level 17
Verified
Well-known
May 16, 2013
844
I prefer to see antivirus products tested on YouTube by someone, or on the Malware Hub, rather than on these av-moneYTest sites. In my opinion, if you put half of those antiviruses up against NotPetya ransomware, they will fall.
 

boredog

Level 9
Verified
Jul 5, 2016
416
And it isn't fair to test in default mode: Comodo with extra settings would have scored much higher. Also, when Defender is tested by itself, that is not fair either.
It should be combined with the rest of Windows security, such as Exploit Guard, SmartScreen and UAC set to max. None of the samples would have run unless the user clicked Allow.
 

In2an3_PpG

Level 18
Thread author
Verified
Top Poster
Content Creator
Well-known
Nov 15, 2016
867
And it isn't fair to test in default mode: Comodo with extra settings would have scored much higher. Also, when Defender is tested by itself, that is not fair either.
It should be combined with the rest of Windows security, such as Exploit Guard, SmartScreen and UAC set to max. None of the samples would have run unless the user clicked Allow.

I believe they mostly leave products at default settings because the average Joe doesn't play around with settings much. They most likely won't know what to do with most of the settings, nor would they honestly care. Average people want an application that just works out of the box. That's why software like Comodo is left for people like us who like tweaking settings.
 
