- Apr 15, 2019
This test shows how an antivirus behaves with certain threats, in a specific environment and under certain conditions.
We encourage you to compare these results with others and make informed decisions on which security products to use.
Before buying an antivirus you should consider factors such as price, ease of use, compatibility, and support. Installing a free trial version allows an antivirus to be tested in everyday use before purchase.
> Microsoft Defender was probably in the default settings though

Well, the test was made for people, not for MalwareTips.
> AVG and Avast are the best, no surprises.. And they do it for free.

This proves that the default is everything. 99% do not change a thing; they would not know how nor why.
> Avast/AVG once again consistent with being top notch. I remember the days when they were just "free antiviruses" and nobody would recommend them.

Even though Avast does well in tests, it is often criticized for bugs. It is the usual split between those who say they never experience the bugs and those who say they do and cannot live with them. Yet its free version remains the most widely installed free product, simply because it is free and consumer testing organizations promote it as "good enough." More or less the same thing with AVG.
Still waiting for the day they add Application Control.
> AVG and Avast are the best, no surprises.. And they do it for free.
> Microsoft Defender was probably in the default settings though

A 1.2% difference is not even statistically meaningful; with sample sets of this size, the gap has to be far larger before it is statistically significant. The flaw in this type of testing is multi-fold. For one, it is neither deterministic nor predictive. The test results apply only to the malware used in the test. Therefore the test results are only a speculative indication of protections. The second thing is that typical users do not download hundreds of files. The small number of files that users download in normal use means that any statistical difference between the various vendors is leveled out and makes no real difference in day-to-day computing practices; Avast, AVG, Microsoft and the others will protect essentially the same.
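To put rough numbers on that claim, here is a minimal sketch of a two-proportion z-test in Python. The sample count (250) and the detection counts are hypothetical stand-ins, not figures from this test:

```python
from math import sqrt, erf

def two_proportion_z_test(hits_a: int, hits_b: int, n: int):
    """Two-sided z-test for the difference between two detection rates,
    each measured on the same number of samples n."""
    p_a, p_b = hits_a / n, hits_b / n
    p_pool = (hits_a + hits_b) / (2 * n)        # pooled detection rate
    se = sqrt(p_pool * (1 - p_pool) * (2 / n))  # std. error of the difference
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical: 250 samples, product A detects 99.6% (249), product B 98.4% (246)
z, p = two_proportion_z_test(249, 246, 250)
print(f"z = {z:.2f}, p-value = {p:.2f}")  # roughly z = 1.35, p = 0.18
```

At a few hundred samples, a 1.2-point gap falls well inside the noise (p > 0.05); rerun the same rates with a few thousand samples and the gap becomes significant. Sample size, not any fixed percentage threshold, decides what is meaningful.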
Do you think the huge user base Avast/AVG has directly affects its detection capabilities, hence why it's first in the test? I'm sure the telemetry gathered from millions of users' infections has something to do with it.
The majority of the world still wants free software.
The size of the user base does factor into performance, but it is only a small piece of the puzzle. If user base were the primary factor, then Microsoft would blow every other security product out of the water, since its base is the 1+ billion Windows systems that provide telemetry to its threat intelligence ecosystem. What is never discussed - because there is virtually no visibility into any of it - is how widely vendors' backends and technologies vary.
Thanks for your input.
Beginning in 2016, Avast and AVG made infrastructure investments that improved their detection rates.
The other thing to be aware of is that vendors have gamed AV-Comparatives tests for years. Every one of the vendors knows the criteria and the test methodologies; none of it is kept secret from them. Most of them tune things to optimize test results. I don't think Emsisoft did. In fact, besides the cost (approximately $50,000 per test), I recall Emsisoft because it felt cheated by those that were tuning their products, and AV-Comparatives did not do a very good job of policing it. Some are better at the tuning than others. The ones that are blatant about it get busted and banned by AV-Comparatives, like Qihoo.
The other thing you never learn is which vendors do a good job of improving their products based on lab feedback. Some merely write signatures for the items they missed, while others make efforts to improve their non-signature capabilities as well. None of the vendors are very forthright about it unless they are called out, as in the case of Google's Project Zero. When called out, Kaspersky reacts quickly. Norton is reasonably quick. Microsoft is always the slowest to react, and that is not because it is dragging its feet: Microsoft's protections are spread across many different points in its operational structure, and it takes time to coordinate and orchestrate within such a large environment.
The only thing you can do is look at test results across multiple test labs over a number of years to get a real understanding of how well a product will protect. A product like Avast will do great in AV-Comparatives tests but then fail in MRG Effitas simulated banking trojan tests. It's just an example, but the product generally does better than average.
5 Stars & 100% Bars mislead uninformed readers, who have little to no understanding of the variables involved in testing, of what is actually being measured, and of what is left out of the measurements. It is the reason people select the 5 Stars & 100% Bars products and then end up with malware, or worse. But the world is a collection of societies that demand shallow metrics, without the understanding to realize what they are actually looking at.
> Therefore the test results are only a speculative indication of protections.

We can say the same for most, if not all, tests. They are snapshots in time, like election polls.
> I’M GOING TO PAY ALL OF THE MONEY FOR DETECTION OF JUST A FEW MORE SAMPLES THAT I WOULD NEVER ENCOUNTER IN REAL LIFE!!!!!!

Why are you yelling? We all know these tests are for entertainment purposes only.
Please don't make him even angrier, DON'T!!!
YELLING IS FOR ENTERTAINMENT PURPOSES ONLY!!!
> We can say the same for most, if not all, tests. They are snapshots in time, like election polls.

This is a fact of the protection life cycle, with some products being much more resistant to changes in the threat landscape and others not so much.
> It's a choice between published test results and accusations by forum anons of gamed results.

Vendors gaming the tests is common knowledge amongst those in the know within the industry. It is no false accusation. It is a statement of fact.
> You can make it a statement of fact by proving it.

Just a single example: