Of course you are correct: since the malware pool is huge (there are millions of malware variants), whoever creates these tests has to choose a limited number of samples, but a statistically representative one: new threats and known threats.
But the selection criteria for the various categories are extremely complex.
For example, it is possible to test an antivirus against a large number of threats that are not actually widespread, so a real user is unlikely to ever encounter them. Alternatively, you can select the threats that cause the majority of infections among ordinary users.
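As a toy illustration of how much that selection choice matters, here is a minimal Python sketch; the malware families and prevalence figures are invented purely for the example, not taken from any real test set. It draws the same size test set two different ways: uniformly across all families, and weighted by how often each family is actually seen in the wild.

```python
import random
from collections import Counter

# Hypothetical corpus: (malware family, share of real-world infections).
# Families and prevalence figures are invented purely for illustration.
corpus = [
    ("ransomware_A", 0.40),
    ("banking_trojan_B", 0.30),
    ("adware_C", 0.25),
    ("rare_rootkit_D", 0.04),
    ("exotic_wiper_E", 0.01),
]

def uniform_sample(n: int) -> Counter:
    """Every family equally likely, regardless of how common it really is."""
    return Counter(random.choice(corpus)[0] for _ in range(n))

def prevalence_weighted_sample(n: int) -> Counter:
    """Families drawn in proportion to their real-world infection share."""
    families, weights = zip(*corpus)
    return Counter(random.choices(families, weights=weights, k=n))

random.seed(0)
print("uniform  :", uniform_sample(1000))
print("weighted :", prevalence_weighted_sample(1000))
# The two test sets end up with very different compositions, so the same
# product can legitimately score well against one and poorly against the other.
```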
Even identifying the potentially most widespread threats depends on many variables; for example, there are huge geographical differences in how infections spread.
The tests can vary a lot depending on the parameters chosen by whoever performs them, so results for the same malware can change from one test to another.
It frequently happens that an antivirus program obtains an excellent result in one test and a mediocre one in others.
So these tests may offer a statistically sound evaluation of a product, but when the user sees the full green bar in the chart showing 100% detection, the subliminal message is clear, even though reality may be different.
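To put a rough number on that, here is a small Python sketch (the sample size of 380 is an assumption for the example, not from any specific lab) showing that even a perfect 100% score on a test set still leaves a statistically plausible miss rate in the wild:

```python
import math

def wilson_lower_bound(detected: int, tested: int, z: float = 1.96) -> float:
    """95% Wilson score lower bound for the true detection rate."""
    if tested == 0:
        return 0.0
    p = detected / tested
    denom = 1 + z * z / tested
    centre = p + z * z / (2 * tested)
    margin = z * math.sqrt(p * (1 - p) / tested + z * z / (4 * tested * tested))
    return (centre - margin) / denom

tested = 380      # hypothetical test set size
detected = 380    # the "full green bar": 100% on the chart

print(f"Score on the chart : {detected / tested:.1%}")
print(f"95% lower bound    : {wilson_lower_bound(detected, tested):.1%}")
# ~99.0%: roughly 1 in 100 real-world threats could still slip through,
# and that is before asking which threats were (or were not) in the sample.
```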
That's why I don't put much faith in these tests: they are technically correct, but the evaluation factors are limited to the scope of the test.
Well, that's people's fault for not reading the reports carefully in their entirety and doing some investigating of their own to learn what the test results actually mean, as opposed to what they think they mean based on a few bar charts.
It's human nature at work. 100% = awesome, I will buy and install that one...
I think these tests are more than just a bit misleading, and what you don't know about them can hurt you.