Q&A Why I think testing "labs" are useless

McMcbrad

Level 23
Oct 16, 2020
1,252
Quite frankly, this is where I'm situated, except I consider the AV tests and rabid fan user posts as novelty entertainment only, if I even bother to read them. K is fine, BD is fine, ESET is fine, WD is fine, they're all fine. Does it matter whether one of them scored 0.1% better than the rest in a sudden barrage of malware that won't happen in the real world? Of course not. But I've seen people switch AVs based on how they scored this cycle on some AV lab's "test". It's insane.
All these tests are designed merely as a guide, but because the differences between products have narrowed so much (unlike years ago, when it was Symantec: 98.9% vs McAfee: 81.6%), they can no longer serve even as that.
It's like buying a fridge: you have the volume in litres, but you also have other factors to consider.

When considering which software you'll use in the long run, there are other factors, such as performance, absence of issues, alert frequency/design and others. I personally wouldn't use an antivirus that slows me down or bothers me every 10 minutes, even if it were a triple-engine product with Kaspersky, Bitdefender and Avast.
 

mlnevese

Level 23
Verified
May 3, 2015
1,212
I have been saying something similar for a few years now. All the top brands offer similar protection, so choose the one that doesn't create problems, doesn't slow you down, and whose interface you're comfortable with. Finally, pick the one that's cheaper.

All top brands have hidden bugs that are not evident at first. So, ideally, use their trial period to the last day. Use your computer as you would every day. Notice whether the web pages you visit have any trouble loading. Do you have problems accessing anything on your network? Can you print to your wireless printer or scan from your wireless scanner?

While using the trial, don't go throwing thousands of malware samples at the software. Eventually every product will miss a sample and your machine will be infected for good. Also keep in mind that product A will sometimes detect sample X while product B won't. That doesn't make product A better than B; you can bet there is a sample Y out there that product B detects and A doesn't.
 

MacDefender

Level 14
Verified
Oct 13, 2019
639
All these tests are designed merely as a guide, but because the differences between products have narrowed so much (unlike years ago, when it was Symantec: 98.9% vs McAfee: 81.6%), they can no longer serve even as that.
It's like buying a fridge: you have the volume in litres, but you also have other factors to consider.

When considering which software you'll use in the long run, there are other factors, such as performance, absence of issues, alert frequency/design and others. I personally wouldn't use an antivirus that slows me down or bothers me every 10 minutes, even if it were a triple-engine product with Kaspersky, Bitdefender and Avast.
I like it when the tests reveal enough information about their samples that we can draw meaningful real-world conclusions.

For example, years ago this community was one of the first to start noting how BitDefender’s signatures lagged on zero-day protection. That drove the intuition that if you choose a BD product, it needs to be combined with something else that protects against emerging threats, whether that's a second engine or a dynamic behavior blocker.

Even recently, @harlan4096’s tests showed that F-Secure had repeated blind spots: it missed script-based threats ("scriptors") and broadly whitelisted legitimate processes that can be abused (Python, Node.js, etc.).

When a test just says that product A detected 490 out of 500 threats and product B detected 480 out of 500, I think it's crazy that people switch AV software based on those kinds of results. In a lot of the lab tests there simply is not enough spread between the products, and the tests exaggerate slight differences of less than a few percent in detection rate. Worse, other labs fold non-protection scores into the overall score (ease of use, false positives on a largely irrelevant battery of software, etc.).
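As a rough sketch of how thin that spread is, here's a quick calculation in Python using the hypothetical 490/500 vs 480/500 figures above; it assumes the samples behave like independent draws from the wider threat population, which real test sets only loosely approximate.

```python
# Back-of-the-envelope check: is 490/500 vs 480/500 a meaningful gap?
# Wilson score intervals for the two detection rates (95% confidence).
from math import sqrt

def wilson_interval(hits: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for a detection rate."""
    p = hits / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    margin = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return centre - margin, centre + margin

for name, hits in [("Product A", 490), ("Product B", 480)]:
    low, high = wilson_interval(hits, 500)
    print(f"{name}: {hits / 500:.1%} detected, 95% CI about {low:.1%} to {high:.1%}")

# Prints roughly 96.4%-98.9% for A and 93.9%-97.4% for B: the intervals
# overlap, so the 2-point headline gap says little about real protection.
```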
 

McMcbrad

Level 23
Oct 16, 2020
1,252
Even recently, @harlan4096’s tests showed that F-Secure had repeated blind spots: it missed script-based threats ("scriptors") and broadly whitelisted legitimate processes that can be abused (Python, Node.js, etc.).
Same as in my test, unfortunately.

For example, years ago this community was one of the first to start noting how BitDefender’s signatures lagged on zero-day protection.
You can only go so far relying on signatures. The behavioural blocker was amazing while it still had a slider to adjust its aggressiveness, but it didn't take long for them to kill that, as well as the IDS adjustment.

I also no longer trust performance tests that put product A in first place and product B last. In my own tests I've seen how a product deemed one of the heaviest can actually be the best fit for a very old laptop, and how a product ranked near the top can overload my SSD.
 

monkeylove

Level 5
Mar 9, 2014
230
In terms of protection, tests are very helpful given the costs of security programs.

For performance, YMMV. One quick test is to run benchmark software after installing a product. The catch is that you have to make sure the product is uninstalled correctly after testing, which may mean using any uninstallers or cleaners provided by the company.
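For anyone who wants to try that, here's a minimal sketch of that kind of before/after check; it isn't any particular benchmark tool, just made-up workloads (a large file copy and process launches) that real-time scanners typically slow down. Run it before installing the trial and again after, then compare the medians.

```python
# Minimal before/after performance sketch: run once on a clean system,
# install the AV trial, run again, and compare. Workloads and sizes here
# are arbitrary examples, not a standard benchmark.
import shutil, statistics, subprocess, sys, tempfile, time
from pathlib import Path

def median_file_copy(runs: int = 5, size_mb: int = 200) -> float:
    """Median seconds to copy a throwaway file; on-access scanners add overhead here."""
    src = Path(tempfile.gettempdir()) / "av_bench_src.bin"
    src.write_bytes(b"\0" * size_mb * 1024 * 1024)
    times = []
    for i in range(runs):
        dst = src.with_name(f"av_bench_copy_{i}.bin")
        start = time.perf_counter()
        shutil.copy(src, dst)
        times.append(time.perf_counter() - start)
        dst.unlink()
    src.unlink()
    return statistics.median(times)

def median_process_launch(runs: int = 10) -> float:
    """Median seconds to spawn a short-lived process; AV hooks add launch latency."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([sys.executable, "-c", "pass"], check=True)
        times.append(time.perf_counter() - start)
    return statistics.median(times)

if __name__ == "__main__":
    print(f"file copy     : {median_file_copy():.3f} s (median)")
    print(f"process launch: {median_process_launch():.3f} s (median)")
```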

Finally, one reason for switching is that the performance sometimes goes down due to changes in the operating system and/or the security program.
 

mlnevese

Level 23
Verified
May 3, 2015
1,212
As far as I'm concerned, the best way to test performance is to install the product on your machine and use it. If you can't feel the difference, then it's light. A 0.1 ms difference in loading a webpage, for instance, doesn't bother me or make the product "heavier".

Don't even mention RAM. High RAM usage does not in any way qualify a product as heavy.
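Part of the reason is that the RAM column in a task manager (the working set / RSS) also counts shared libraries and cache the OS can reclaim. A small sketch with the third-party psutil package shows the gap between that figure and the memory a process uniquely holds; "MsMpEng.exe" (Microsoft Defender's engine) is only an example target name.

```python
# Sketch: compare a process's working set (what task managers show) with its
# unique set size (memory only that process holds). Requires the third-party
# psutil package; querying another account's process may need admin rights.
import psutil

TARGET = "MsMpEng.exe"  # example: Microsoft Defender's engine process

for proc in psutil.process_iter(["name"]):
    if (proc.info["name"] or "").lower() == TARGET.lower():
        try:
            mem = proc.memory_full_info()  # includes uss on Windows/Linux/macOS
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue
        print(f"pid {proc.pid}: working set {mem.rss / 2**20:.0f} MiB, "
              f"unique (uss) {mem.uss / 2**20:.0f} MiB")
```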
 