Disclaimer

This test shows how an antivirus behaves with certain threats, in a specific environment and under certain conditions.
We encourage you to compare these results with others and make informed decisions about which security products to use.
Before buying an antivirus you should consider factors such as price, ease of use, compatibility, and support. Installing a free trial version allows an antivirus to be tested in everyday use before purchase.

Bonorex

Level 1
In my opinion, false positives can be as problematic as real detections. If a security program is known for giving too many false positives, some users might be tempted to bypass a warning, thinking it's a false positive, and get infected. Users can always find justifications for why a security suite gave a false positive: the program is very uncommon, too old, and so on. The fact is that a professional antivirus should give zero (or close to zero) false positives, no matter how old or widespread a program is. If we look at false positive results, there is always a common pattern: some programs consistently give an above-average number of false positives, while others never give any. Programs like Bitdefender, Kaspersky and ESET are so good that they give almost no false positives, no matter which websites or programs someone uses to test them. Test after test, they get near-perfect results. This gives great confidence in the program, since users know that when their antivirus displays a critical warning, it is almost certainly justified.
 

roger_m

Level 30
Verified
Content Creator
With regards to false positives, based on scanning installers at VirusTotal, antiviruses have been getting much better in recent months. As an example, it used to be common for Avast and AVG to identify installers for PUPs as generic trojans, which doesn't happen anywhere near as much anymore. In general, antiviruses seem less likely to wrongly identify PUPs as malware than they have been in the past. Since PUP detection in antiviruses is often not that good, the files that were previously identified as malware are now not detected at all.
 

Andy Ful

Level 62
Verified
Trusted
Content Creator
...
The fact is that a professional antivirus should give zero (or close to zero) false positives, no matter how old or widespread a program is.
...
... or maybe the user should choose what is better for him/her or for the family?
There are many good AVs around, let's give the chance to diversity and competition.
For example, I would like to install a very aggressive security setup on my father's computer. He is a happy-clicker and does not install many applications. I would like him to be able to install only very popular and reputable applications.

Edit.
It is probable that after lowering the detection thresholds of ESET and Bitdefender in this test, they could achieve perfect detection too (at the cost of a higher number of false positives).
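The threshold trade-off described above can be illustrated with a toy scoring model (hypothetical scores, not any vendor's actual logic): lowering the detection threshold catches more malware but also starts flagging clean files.

```python
# Toy illustration of a detection threshold trade-off.
# Each sample: (name, suspicion score, is actually malware)
samples = [
    ("malware_a", 0.95, True),
    ("malware_b", 0.70, True),
    ("clean_x",   0.65, False),
    ("clean_y",   0.20, False),
]

def evaluate(threshold):
    """Count detections and false positives at a given threshold."""
    detections = sum(1 for _, score, bad in samples if score >= threshold and bad)
    false_positives = sum(1 for _, score, bad in samples if score >= threshold and not bad)
    return detections, false_positives

print(evaluate(0.90))  # (1, 0): cautious -- misses malware_b, but no false positives
print(evaluate(0.60))  # (2, 1): aggressive -- perfect detection, one false positive
```

The same engine can look like a "zero FP" product or a "perfect detection" product purely depending on where this dial is set, which is why comparing detection scores without the accompanying false positive counts is misleading.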
 
Last edited:

FireHammer

Level 2
Hi. Another thing: I am not allowed to create threads yet, so I have a question. My ISP, STOFA, has offered me a lifetime subscription to F-Secure for free, but I love Bitdefender. I could save some money, though, and I have 439 days left on my Bitdefender subscription. What should I do?
I don't know much about F-Secure.
 


MacDefender

Level 11
Verified
In my opinion, false positives could be as problematic as real detections. ...
I’m certainly not downplaying the importance of low false positives and accurate signature detections, especially for things like PUPs and software piracy tools, which a lot of AVs struggle to identify accurately (even Windows Defender and Bitdefender frequently label such tools under generic trojan or generic machine-learning signatures)... But whether it’s signature detection or heuristics, this is hard to achieve.

For example, a lot of Windows activation bypass tools automate disabling SFC so they can replace a DLL with a doctored one to fool Windows into activating against the wrong server. Rufus (the USB stick tool) modifies group policy settings and has code for installing bootloaders. All of these can easily look like rootkit-like behaviors used by malware, and it’s really easy to accidentally write signatures that flag these binaries. I did a test by hex-editing a few inconsequential strings in a Rufus release, and it was picked up as malware by at least a dozen engines. We just had a recent thread about Kaspersky misidentifying a Firefox password backup tool as a password stealer.
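The hex-editing experiment above works because even a functionally irrelevant byte change produces a completely different file hash, so every hash-based signature or whitelist entry misses the modified file. A minimal sketch (the byte strings are stand-ins, not a real PE binary):

```python
import hashlib

# Toy stand-in for a binary that embeds an inconsequential string.
original = b"MZ\x90\x00 ... Rufus 3.8 ... cosmetic string data ..."

# Flip a few bytes inside the embedded string. The program would behave
# identically, but its cryptographic identity is now completely different.
patched = original.replace(b"Rufus", b"Rufos")

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(patched).hexdigest()
print(h1 == h2)  # False: any hash-keyed whitelist or signature no longer matches
```

This is why engines that lean on exact-hash reputation can flip from "clean" to "generic trojan" on a trivially modified build of well-known software.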

In reality, most AVs maintain some sort of cloud or offline whitelist of popular applications and those get to simply skip signature detections and sometimes even behavior blocking. That’s where I worry a bit about the accuracy of these formal false positive tests. How sure are we that they are truly low false positive engines, instead of knowing (either via experience or partnerships with the testing firms) what binaries to whitelist or what set of default settings to use to minimize FPs in the tests?
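The kind of whitelist described above can be sketched as a plain hash lookup (a hypothetical structure; real engines also weigh signer identity, prevalence, and cloud reputation):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical set of known-good release hashes maintained by the vendor.
KNOWN_GOOD = {sha256(b"stock rufus 3.8 build")}

def scan(file_bytes: bytes) -> str:
    """Toy scan pipeline: whitelist check first, everything else gets scanned."""
    if sha256(file_bytes) in KNOWN_GOOD:
        # Whitelisted binaries may skip signatures and even behavior blocking.
        return "clean (whitelisted)"
    # Anything not on the list falls through to signatures/heuristics,
    # which is where a repacked-but-functionally-identical binary gets flagged.
    return "needs signature/heuristic scan"

print(scan(b"stock rufus 3.8 build"))     # clean (whitelisted)
print(scan(b"repacked rufus 3.8 build"))  # needs signature/heuristic scan
```

If test corpora happen to consist of binaries already on such lists, a formal false positive test measures the whitelist's coverage more than the engine's actual discrimination, which is exactly the concern raised above.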


EDIT: Rufus 3.8 | Infected with malware?

Look at that. F-Secure (via Avira), Avira, BitDefender, and a few others all think this is malware. All I did was UPX unpack the stock Rufus binary and repack it at a different compression level. They don’t detect the stock UPX-packed binary. There is clearly some sort of whitelisting going on, and if you run a dynamic test against behavior blockers, you’ll find that many behavior blockers will flag that repacked binary too as soon as it requests UAC elevation to write to group policies.
 
In my opinion, false positives could be as problematic as real detections. ...
Bitdefender is unfortunately quite annoying with false positives.
 
With regards to false positives, based on doing scans of installers at VirusTotal, antiviruses have been getting much better in recent months. ...
This is not good: installers are mostly PUPs, and Avast/AVG, besides detecting these things, also sponsor them.
 
I’m certainly not downplaying the importance of low false positives, and accurate signature detections especially for things like PUPs and software piracy tools... There is clearly some sort of whitelisting going on ...
That's why I say Windows Defender and Bitdefender are very annoying with false positives. Any program that is unusual or lacks a digital signature is already reason enough for a generic trojan detection.
 

MacDefender

Level 11
Verified
that's why I say windows defender and bitdefender are very boring in false positives. any different program or without digital signature is already a reason for a generic trojan.
Yeah, and I’m personally not a fan of such an approach. Neither whitelisting by vendor signature nor whitelisting hashes of specific files is ideal: one leaves customers vulnerable if a vendor is breached, and the other is brittle against zero-day updates (Emsisoft, for example, has had historical problems with marking brand-new Firefox updates as malware, which can get in the way of receiving critical security updates).

I think the main point I’m making regarding this test is that it doesn’t tell us enough about false positives. A corpus of curated samples doesn’t reveal how each vendor addresses false positives, or whether those test results will translate to the kinds of false positive events the average user is likely to run into.

It’s easier to argue that in the wild malware samples are generally relevant because, well, they’re in the wild. This random grab bag of esoteric and mostly unsigned software? I’m not convinced that’s what I want for a real-world false positive test.
 