AV-Comparatives - Real-World Protection Test Feb-May 2024

Disclaimer
  1. This test shows how an antivirus behaves with certain threats, in a specific environment and under certain conditions.
    We encourage you to compare these results with others and make informed decisions on which security products to use.
    Before buying an antivirus you should consider factors such as price, ease of use, compatibility, and support. Installing a free trial version allows an antivirus to be tested in everyday use before purchase.

TairikuOkami

Level 36
Verified
Top Poster
Content Creator
Well-known
May 13, 2017
2,551
Microsoft Defender was probably left in its default settings, though.
Well, the test was made for ordinary people, not for MalwareTips. ;)
AVG and Avast are the best, no surprises. And they do it for free.
This proves that defaults are everything. 99% of users do not change a thing; they would not know how or why.
 

kailyn

Level 2
Jun 6, 2024
63
Avast/AVG are once again consistently top-notch. I remember the days when they were just "free antiviruses" and nobody would recommend them.

Still waiting for the day they add Application Control.
Even though Avast does well in tests, it is often criticized for bugs. It is the usual split between those who say they never experience the bugs and those who say they do and cannot cope with them. Yet its free version remains the most widely installed free product, simply because it is free and because consumer testing organizations promote it as "good enough." More or less the same applies to AVG.

The majority of the world still wants free software.

Avast lost a lot of credibility because it was selling user data. In fact, it recently lost regulatory actions and lawsuits over privacy violations in multiple countries.

AVG and Avast are the best, no surprises. And they do it for free. Microsoft Defender was probably left in its default settings, though.
A 1.2% difference is not even statistically meaningful; the difference would have to be above 10% to be statistically significant. The flaw in this type of testing is manifold. For one, it is neither deterministic nor predictive: the results apply only to the malware samples used in the test, so they are only a speculative indication of protection. Second, typical users do not download hundreds of files. With the small number of files users download in normal use, any statistical difference between the vendors is leveled out and makes no real difference in day-to-day computing; Avast, AVG, Microsoft, and the others will protect essentially the same.
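
Whether a gap of a given size is significant depends mostly on the number of samples, not on a fixed percentage. Here is a minimal sketch of a two-proportion z-test, assuming roughly 250 test cases per product (a typical size for a Real-World Protection run) and hypothetical block rates 1.2 points apart - the numbers are illustrative, not figures from the report:

Python:
from math import sqrt, erf

def two_proportion_z(p1, p2, n):
    # Pooled two-proportion z-test with equal sample size n per product.
    pooled = (p1 + p2) / 2
    se = sqrt(2 * pooled * (1 - pooled) / n)
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical block rates 1.2 points apart, n = 250 samples each
# (illustrative values only, not the report's actual figures).
z, p = two_proportion_z(0.998, 0.986, 250)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")  # p > 0.05 here, i.e. not significant

With a few hundred samples, a roughly one-point gap at the top of the chart sits within the noise; it only starts to mean something when it shows up across many consecutive tests.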
 

RoboMan

Level 35
Verified
Top Poster
Content Creator
Well-known
Jun 24, 2016
2,451
Even though Avast does well in tests, it is often criticized for bugs. It is the usual split between those who say they never experience the bugs and those who say they do and cannot cope with them. Yet its free version remains the most widely installed free product, simply because it is free and because consumer testing organizations promote it as "good enough." More or less the same applies to AVG.

The majority of the world still wants free software.
Do you think the huge user base Avast/AVG has directly affects its detection capabilities, and is that why it's first in the test? I'm sure the telemetry gathered from millions of users' infections has something to do with it.
 

kailyn

Level 2
Jun 6, 2024
63
Do you think the huge user base Avast/AVG has directly affects its detection capabilities, and is that why it's first in the test? I'm sure the telemetry gathered from millions of users' infections has something to do with it.
The size of the user base does factor into performance, but it is only a small piece of the puzzle. If the user base were the primary factor, then Microsoft would blow every other security product out of the water, since its base consists of the 1+ billion Windows systems feeding telemetry into its threat-intelligence ecosystem. What is never discussed - because there is virtually no visibility into any of it - is how wildly vendors' backends and technologies vary.

Beginning in 2016, Avast and AVG made infrastructure investments that improved their detection rates.

The other thing to be aware of is that vendors have gamed AV-Comparatives tests for years. Every vendor knows the criteria and the test methodologies; none of it is kept secret from them. Most of them tune their products to optimize test results. I don't think Emsisoft did. In fact, besides the cost (approximately $50,000 per test), I recall Emsisoft stopped participating because it felt cheated by those that were tuning their products and felt AV-Comparatives did not do a very good job of policing it. Some are better at the tuning than others. The ones that are blatant about it get busted and banned by AV-Comparatives, like Qihoo.

The other thing you never learn is which vendors do a good job of improving their products based on lab feedback. Some merely write signatures for the samples they missed, while others work to improve their non-signature capabilities. None of the vendors are very forthright about it unless they are called out, as in the case of Google's Project Zero. When called out, Kaspersky reacts quickly. Norton is reasonably quick. Microsoft is always the slowest to react, and that is not because it is dragging its feet: its protections are spread across many different points in its operational structure, and it takes time to coordinate and orchestrate changes within such a large environment.

The only thing you can do is look at results across multiple test labs over a number of years to get a real sense of how well a product will protect. A product like Avast will do great in AV-Comparatives tests but then fail an MRG Effitas simulated banking-trojan test. It's just an example; the product generally does better than average.

5 Stars and 100% bars mislead uninformed readers, who have little to no understanding of the variables involved in testing, what is actually being measured, and what is left out of the measurements. That is why people pick the 5-star, 100%-bar products and still end up with malware, or worse. But the world demands shallow metrics, and most people lack the background to realize what they are actually looking at.
 

RoboMan

Level 35
Verified
Top Poster
Content Creator
Well-known
Jun 24, 2016
2,451
Thanks for your input
 

monkeylove

Level 12
Verified
Top Poster
Well-known
Mar 9, 2014
562

It's a choice between published test results and accusations by forum anons of gamed results.
 

kailyn

Level 2
Jun 6, 2024
63
We can say the same for most, if not all, tests. They are snapshots in time, like election polls.
This is a fact of the protection life cycle: some products are much more resistant to changes in the threat landscape, others not so much.

If you threw a lot of 16-bit and 32-bit malware at signature-based solutions, people would have meltdowns and be indignant that it is not detected. It does not matter to them that such malware cannot even run on their 64-bit systems, but we're talking about people and their irrational beliefs, so... Also, malware signatures do not last forever. There are short-term signatures, which in turn get converted into long-term signatures, and after roughly five years older signatures begin to be purged from the databases. Vendors cannot keep and serve every signature they have ever created; it is neither practical nor feasible.

Just look at the range of results from one test to the next - in particular, the simulated banking tests by MRG Effitas. One solution does great, then fails the next time; they fix it, do great again, and then fail the next test. Solutions such as Avast, Norton, and ESET seem to follow that cycle, or at least they did; I'm not sure they even participate in MRG tests any longer. Every time MRG switches up the tests, you can pretty much predict which products will fail and which will pass.

The protections that are most resistant to changes in the threat landscape are default deny (whatever is not explicitly allowed is denied) and virtualization. But those too have to receive periodic updates, mostly because Microsoft makes changes to the OS that break things, not because there has been a revolutionary change in the threat landscape. That landscape is pretty much unchanged in terms of the attack kill chain and tactics, techniques, and procedures (TTPs). LOLBins, GTFOBins, etc. have been more or less static for years. Recently a few Azure LOLBins were discovered, but customers do not have access to those; only Microsoft does, and it has either locked them down on its infrastructure or changed how they work to mitigate abuse.
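
To make the "whatever is not allowed is denied" idea concrete, here is a minimal sketch of a hash-based allowlist check. The allowlist contents and helper names are hypothetical; real deployments use signed policy mechanisms such as SRP, AppLocker, or WDAC rather than a script like this.

Python:
import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 hashes for explicitly trusted binaries.
# In a real deployment this comes from a signed, centrally managed policy.
ALLOWED_SHA256 = {
    "0123abc...",  # placeholder hash of a trusted application
}

def sha256_of(path: Path) -> str:
    # Hash the file in chunks so large binaries are not read into memory at once.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_execution_allowed(path: Path) -> bool:
    # Default deny: anything not on the allowlist is blocked, including
    # never-before-seen binaries, with no signature of the malware required.
    return sha256_of(path) in ALLOWED_SHA256

That is also why such protections age well: they do not depend on recognizing new malware, only on maintaining the allowlist when the OS or the allowed applications change.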

One cannot extrapolate test results with absolute certainty, but people do it all the time. Tests are not predictive over the long term, and they are definitely not deterministic.
 

kailyn

Level 2
Jun 6, 2024
63
You can make it a statement of fact by proving it.
Just a single example:



There are discussions here at MT about vendors optimizing their products for the AV-Comparatives tests. Fabian Wosar made posts here about the problem, and it does not just apply to Tencent and Qihoo.

Did you know that the signature engines on VirusTotal are not the ones in the consumer and enterprise products (this has always been the case)? Did you know that vendors copy each other's signatures from VirusTotal (proven by Eugene Kaspersky and his team)?
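
For anyone who wants to see those per-engine verdicts for themselves, here is a minimal sketch of pulling a file report from VirusTotal's public v3 API (the API key and hash are placeholders). Keep in mind that the engine builds and settings behind these verdicts are VirusTotal's own, which is exactly why they do not map one-to-one onto the shipping consumer products.

Python:
import requests  # third-party: pip install requests

API_KEY = "YOUR_VT_API_KEY"             # placeholder
FILE_SHA256 = "PUT_A_SHA256_HASH_HERE"  # placeholder

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{FILE_SHA256}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
resp.raise_for_status()

# Per-engine verdicts as reported by VirusTotal's own scanning backend,
# not by the vendors' installed consumer or enterprise products.
results = resp.json()["data"]["attributes"]["last_analysis_results"]
for engine, verdict in sorted(results.items()):
    print(f"{engine:25} {verdict['category']:12} {verdict.get('result')}")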
 
