AV-Comparatives Consumer Real-World Protection Test February-May 2021

Disclaimer
  1. This test shows how an antivirus behaves with certain threats, in a specific environment and under certain conditions.
    We encourage you to compare these results with others and make informed decisions about which security products to use.
    Before buying an antivirus, you should consider factors such as price, ease of use, compatibility, and support. Installing a free trial version allows an antivirus to be tested in everyday use before purchase.

Gandalf_The_Grey

Level 50
Verified
Trusted
Content Creator
Apr 24, 2016
3,898
The first half-year results of the ongoing Consumer Real-World Protection Test are now available!
Introduction

Malicious software poses an ever-increasing threat, not only because the number of malware programs is growing, but also because of the nature of the threats. Infection vectors are shifting from simple file-based methods to distribution via the Internet. Malware increasingly targets users, e.g. by deceiving them into visiting infected web pages, installing rogue/malicious software or opening emails with malicious attachments. The scope of protection offered by antivirus programs is extended by the inclusion of e.g. URL blockers, content filtering, cloud reputation systems, ML-based static and dynamic detections and user-friendly behavior blockers. If these features are perfectly coordinated with the signature-based and heuristic detection, the protection provided against threats increases.

In this test, all protection features of the product can be used to prevent infection – not just signatures or heuristic file scanning. A suite can step in at any stage of the process – accessing the URL, downloading the file, formation of the file on the local hard drive, file access and file execution – to protect the PC. This means that the test achieves the most realistic way of determining how well the security product protects the PC. Because all of a suite’s components can be used to protect the PC, it is possible for a product to score well in the test by having e.g. very good behavioral protection, but a weak URL blocker. However, we would recommend that all parts of a product should be as effective as possible. It should be borne in mind that not all malware enters computer systems via the Internet, and that e.g. a URL blocker is ineffective against malware introduced to a PC via a USB flash drive or over the local area network.

In spite of these technologies, it remains very important that conventional, non-cloud features such as the signature-based and heuristic detection abilities of antivirus programs continue to be tested. Even with all the protection features available, the growing frequency of zero-day attacks means that some computers will inevitably become infected. As signatures can be updated, they provide the opportunity to recognize and remove malware which was initially missed by the security software. Other protection technologies often offer no means of checking existing data stores for already-infected files, which can be found on the file servers of many companies. Those security layers should be understood as an addition to good detection rates, not as a replacement.

The Real-World Protection test is a joint project of AV-Comparatives and the University of Innsbruck’s Faculty of Computer Science and Quality Engineering. It is partially funded by the Republic of Austria.
Unfortunately, we cannot provide results for Microsoft Defender. During the testing, despite being configured for automatic updates and performing manual updates, parts of Defender were not correctly updated. As there were no error messages, this issue was only discovered at the beginning of June and required a new installation of the OS.
 

MacDefender

Level 14
Verified
Oct 13, 2019
685
I’m not sure I agree with disqualifying Windows Defender because it malfunctioned. That can happen in the real world too, and the results should be presented as-is. It’s selective presentation of results if a vendor is allowed to say “oops, it malfunctioned” as a way of making bad results go away.
 

SeriousHoax

Level 37
Verified
Mar 16, 2019
2,656
My point still stands.
No, it does not. Please check the testing methodology of both tests. I would love to quote one of "struppigel's" comments (he's a malware analyst working for G-Data), but I don't remember which thread it's in. Among other things, he briefly explained how even little things like downloading a sample from the internet vs. simply extracting it from a random archive can have an impact on how an AV's behavioral component reacts.

In the real-world test, samples are downloaded via malicious URLs in the browser, which is not the case for the malware protection tests. Trend is pretty good in the URL filtering department, but its signatures aren't very good. However, it's very sensitive about unknown files downloaded from the internet. My guess is that this is the reason Trend scores high, with a very high number of false positives, in these tests.
One of Trend's employees also said that if the AV sees the PC coming into contact with more malware than usual, it automatically switches to a hypersensitive mode which, as the name suggests, is extra aggressive.
So, considering all of this, Trend's result here is very clear and predictable to me.
 

SeriousHoax

Level 37
Verified
Mar 16, 2019
2,656
I’m not sure I agree with disqualifying Windows Defender because it malfunctioned. That can happen in the real world too, and the results should be presented as-is. It’s selective presentation of results if a vendor is allowed to say “oops, it malfunctioned” as a way of making bad results go away.
It could be anything: maybe a faulty Windows update, maybe a faulty Defender update, maybe a random bug. This can also happen with a third-party AV, so IMO it was the correct decision to disqualify Microsoft Defender. I usually check Defender against malware 3-4 days a week and haven't faced any issues, so whatever happened in their test is probably not a universal problem.
 

MacDefender

Level 14
Verified
Oct 13, 2019
685
It could be anything: maybe a faulty Windows update, maybe a faulty Defender update, maybe a random bug. This can also happen with a third-party AV, so IMO it was the correct decision to disqualify Microsoft Defender. I usually check Defender against malware 3-4 days a week and haven't faced any issues, so whatever happened in their test is probably not a universal problem.
I agree that it can happen with any AV. I've had issues with Kaspersky 2020 on a Surface Pro 7: when it wakes from sleep, the AV database stops updating and the cloud functionality all silently stops working because it thinks the network is offline (days later I get a warning about out-of-date databases).

I'm mostly just concerned that the aggregate testing scores will miss this data point, as if MS Defender didn't participate in this test. It's a matter of results presentation. Labeling it as inconclusive/a malfunction in the overall score, or presenting the score with a caveat that it malfunctioned, would be more transparent. The reliability of an AV is an important metric as well. It might not be the goal of this test, admittedly, but I still think that silently dropping the product from the table as if it weren't tested at all is not the right way to display this result.

(I have nothing against MS Defender, obviously. It's a great product. I'd make this same comment for any other vendor that had their results omitted due to a malfunction)
 

SeriousHoax

Level 37
Verified
Mar 16, 2019
2,656
I agree that it can happen with any AV. I've had issues with Kaspersky 2020 on a Surface Pro 7: when it wakes from sleep, the AV database stops updating and the cloud functionality all silently stops working because it thinks the network is offline (days later I get a warning about out-of-date databases).

I'm mostly just concerned that the aggregate testing scores will miss this data point, as if MS Defender didn't participate in this test. It's a matter of results presentation. Labeling it as inconclusive/a malfunction in the overall score, or presenting the score with a caveat that it malfunctioned, would be more transparent. The reliability of an AV is an important metric as well. It might not be the goal of this test, admittedly, but I still think that silently dropping the product from the table as if it weren't tested at all is not the right way to display this result.

(I have nothing against MS Defender, obviously. It's a great product. I'd make this same comment for any other vendor that had their results omitted due to a malfunction)
Oh! I see what you mean. In that case, I agree with you. I think MRG-Effitas do what you're suggesting. There was a network configuration error for Norton in a test, but they didn't remove Norton's result from the report; instead, they shared the result and added an explanation below it.
I guess AVC has a different policy and won't publish results unless the whole test was completed without any errors.
 

Andy Ful

Level 72
Verified
Trusted
Content Creator
Dec 23, 2014
6,126
...
I'm mostly just concerned that the aggregate testing scores will miss this datapoint, like MS Defender didn't participate in this test.
Yes, I have this problem when calculating the cumulative scores of some AVs over a long period of time. If there is one such event across several tests, it can be approximately corrected by using the average. For Defender, in the AV-Comparatives Real-World consumer tests, the average is about 2 missed samples per 2-month period. Defender missed 7 samples in January-March (about three times more than usual), so the approximate number of missed samples is 7+2=9 for the January-May period.
The cumulative scores of Defender for the period 2019-2021 are similar to Avast and Avira.
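The extrapolation above can be sketched in a few lines of Python. This is a hedged illustration of the arithmetic only, assuming (as the post states) a long-run average of about 2 missed samples per 2-month reporting period; the figures come from the post, not from the published report tables, and the function name is invented for this sketch.

```python
# Sketch of the missed-sample extrapolation described above.
# Assumption: an AV averages ~2 missed samples per 2-month
# AV-Comparatives reporting period (figure taken from the post).

AVERAGE_MISSED_PER_PERIOD = 2  # typical misses per 2-month period

def estimate_total_missed(known_missed: int, missing_periods: int) -> int:
    """Fill in periods with no data using the historical average."""
    return known_missed + missing_periods * AVERAGE_MISSED_PER_PERIOD

# Defender: 7 known misses, plus one 2-month period lacking data
print(estimate_total_missed(7, 1))  # -> 9
```

The same helper would work for any product with a gap in its test history, as long as its historical per-period average is known.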
 