- Dec 23, 2014
Andy! I would like to ask you: are the tests on the AV-Comparatives and AV-Test sites relevant? Are these sites reliable? If not, what is the problem with them?
Those tests are considered the most reliable among professional tests, but their results are often misunderstood. For example, people usually ignore the information included in the testing methodology:
https://www.av-comparatives.org/real-world-protection-test-methodology/

In this kind of testing, it is very important to use enough test cases. If an insufficient number of samples is used in a comparative test, differences in results may not indicate actual differences in the protective capabilities of the tested products. The methodology notes that their tests use many more test cases (samples) per product and month than any similar test performed by other testing labs, and that because of the higher statistical significance this achieves, they consider all the products in each results cluster to be equally effective, assuming they have a false-positives rate below the industry average.
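The sample-size point can be sketched with a standard two-proportion z-test. The counts below are hypothetical, chosen only to show that a gap of a few missed samples out of ~500 is not statistically significant:

```python
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Normal-approximation z statistic for the difference between two
    detection rates. |z| < 1.96 means the gap is not significant at the
    5% level. Counts here are illustrative, not from any real test."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Product A misses 2 of 500 samples (99.6%), product B misses 6 (98.8%).
z = two_proportion_z(498, 500, 494, 500)
print(f"z = {z:.2f}")  # about 1.42, below the 1.96 threshold
```

So even a product that missed three times as many samples cannot be called worse on this evidence alone, which is exactly why the labs group such products into one cluster.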
Here is an example from a recent test:
According to the testing methodology, when a test includes about 500 samples, the first 9 AVs can have similar real-world detection because they are placed in the same cluster (cluster no. 1). ESET still had a fair chance of being the best in the wild during the testing period, even though it missed 4 samples in that particular test. The same is true for any AV in that cluster.
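A quick way to see why missing 4 of ~500 samples still leaves the top spot open is to compute a confidence interval for the measured detection rate (a normal-approximation sketch; the counts are illustrative):

```python
import math

def detection_ci(hits, n, z=1.96):
    """95% normal-approximation confidence interval for a detection rate.
    Counts are hypothetical, for illustration only."""
    p = hits / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# 496 of 500 detected: the true rate could plausibly be anywhere
# from roughly 98.4% to essentially 100%.
lo, hi = detection_ci(496, 500)
print(f"{lo:.1%} - {hi:.1%}")
```

The interval overlaps those of products that missed nothing, so the single test cannot rank the cluster's members against each other.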
Most posts in the threads about testing results rest on the illusion that those results reflect real-world differences. Many more samples are needed to reveal the real differences.
In my experience, the cumulative results of AV-Comparatives, AV-Test, and SE Labs over two years (over 7,000 samples) are required to show the differences in protection among the top solutions.
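A rough power calculation supports the order of magnitude of that figure. Assuming (purely for illustration) true protection rates of 99.8% versus 99.4%, the standard two-proportion sample-size formula gives the number of samples per product needed to separate them at 5% significance with 80% power:

```python
import math

def samples_needed(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate per-product sample size to distinguish two true
    detection rates (5% significance, 80% power, normal approximation).
    The rates are hypothetical examples."""
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * var / (p1 - p2) ** 2)

# Telling a 99.8% product apart from a 99.4% one:
print(samples_needed(0.998, 0.994))  # roughly 3,900 samples per product
```

Single monthly tests of a few hundred samples are far below this, which is why only cumulative results over long periods can separate the top products.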