Hot Take: Review based on Shadowra's tests

I was the one debating him in that thread, plus another one. His claim that Emsisoft was the only product to block his samples, while VIPRE, ESET, Kaspersky, Windows Defender, G Data and Trend Micro all failed, is very hard to believe.

Also, he only started praising Emsisoft/Bitdefender a day ago, right after the most recent Emsisoft update, which seems sus to me.

Edit: Forgot to mention this earlier, but where is his evidence? From what I can tell, no post on his profile contains any actual proof of his tests.
 
Reddit can be one of the least balanced places to hold or view a discussion on a divisive subject like antiviruses. MT has its moments of excitement, but the dogpiling on Reddit with the upvote/downvote system is incredible. It makes me appreciate this site all the more.
 
Reddit can be one of the least balanced places to hold or view a discussion on a divisive subject like antiviruses. MT has its moments of excitement, but the dogpiling on Reddit with the upvote/downvote system is incredible. It makes me appreciate this site all the more.
I honestly don't know why I still post in the antivirus subreddit. If you don't recommend Windows Defender or Bitdefender, you get downvoted.
 
I take this test as I take all others: they are "for entertainment purposes only", but they can be used as one of many resources when formulating your computer protection. In other words, look at them all, but never base your decisions on one, or ten, tests.

We all know that the difference in protection levels between the top 6 or 8 AVs is basically negligible. If you're paranoid, just use a secondary program such as Simple Windows Hardening, CyberLock, AppGuard, HitmanPro.Alert or Malwarebytes. I've used them all, and you can't go wrong with any of them.
;)
 
Btw syscall detection is not magical.

AVs use behavioural monitoring hooks to monitor calls like NtOpenFile, NtAllocateVirtualMemory and so on.
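To make that concrete, here is a minimal sketch of the "wrap the call, log it, forward it" idea behind such hooks. Everything in it is illustrative: real products typically inline-patch the ntdll stubs (NtOpenFile and friends) from an injected component, whereas this sketch hooks the import table of its own process and intercepts MessageBoxA, simply because that is safe to demonstrate.

#include <windows.h>
#include <stdio.h>
#include <string.h>

/* Hedged sketch: an IAT hook on MessageBoxA in our own process stands
 * in for how a monitoring hook wraps a call, logs it and forwards it.
 * Real AV hooks usually inline-patch ntdll stubs instead. */
typedef int (WINAPI *MsgBoxA_t)(HWND, LPCSTR, LPCSTR, UINT);
static MsgBoxA_t real_MessageBoxA;

static int WINAPI HookedMessageBoxA(HWND h, LPCSTR text, LPCSTR cap, UINT u)
{
    printf("[monitor] MessageBoxA(\"%s\")\n", text); /* the sensor */
    return real_MessageBoxA(h, text, cap, u);        /* forward the call */
}

int main(void)
{
    BYTE *base = (BYTE *)GetModuleHandleA(NULL);
    IMAGE_NT_HEADERS *nt = (IMAGE_NT_HEADERS *)
        (base + ((IMAGE_DOS_HEADER *)base)->e_lfanew);
    IMAGE_IMPORT_DESCRIPTOR *imp = (IMAGE_IMPORT_DESCRIPTOR *)(base +
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT].VirtualAddress);

    for (; imp->Name; imp++) {                        /* each imported DLL */
        if (!imp->OriginalFirstThunk)
            continue;
        IMAGE_THUNK_DATA *names = (IMAGE_THUNK_DATA *)(base + imp->OriginalFirstThunk);
        IMAGE_THUNK_DATA *iat   = (IMAGE_THUNK_DATA *)(base + imp->FirstThunk);
        for (; names->u1.AddressOfData; names++, iat++) {
            if (names->u1.Ordinal & IMAGE_ORDINAL_FLAG)
                continue;                             /* imported by ordinal */
            IMAGE_IMPORT_BY_NAME *n =
                (IMAGE_IMPORT_BY_NAME *)(base + names->u1.AddressOfData);
            if (strcmp((char *)n->Name, "MessageBoxA") != 0)
                continue;
            DWORD old;                                /* swap the IAT pointer */
            VirtualProtect(&iat->u1.Function, sizeof(void *), PAGE_READWRITE, &old);
            real_MessageBoxA = (MsgBoxA_t)(ULONG_PTR)iat->u1.Function;
            iat->u1.Function = (ULONG_PTR)HookedMessageBoxA;
            VirtualProtect(&iat->u1.Function, sizeof(void *), old, &old);
        }
    }
    MessageBoxA(NULL, "hello", "demo", MB_OK);        /* now routed via the hook */
    return 0;
}

The whole point of "direct syscalls" is to sidestep wrappers like this: the malware performs the kernel transition itself instead of going through the hooked user-mode stub.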

These calls have numbers as well, and the numbers are not fixed: Microsoft changes them in major (sometimes even in minor) updates, which breaks malware that has them hardcoded.
The numbers are just small hexadecimal identifiers (system service numbers, or SSNs).

They can be pulled dynamically from ntdll's in-memory image as well, but that's not the point.
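As a rough illustration of that dynamic pulling (the idea behind techniques like Hell's Gate), here is a hedged sketch that reads the service number out of an Nt* stub in ntdll.dll. The stub layout it assumes (mov r10, rcx / mov eax, <SSN>) is typical for clean x64 stubs but not guaranteed, and NtOpenFile is just an example.

#include <windows.h>
#include <stdio.h>

/* Hedged sketch: on x64 an unhooked Nt* stub in ntdll.dll commonly
 * starts with 4C 8B D1 (mov r10, rcx) followed by B8 xx xx xx xx
 * (mov eax, <SSN>), so the syscall number sits at bytes 4..7. */
static DWORD GetSyscallNumber(const char *name)
{
    BYTE *stub = (BYTE *)GetProcAddress(GetModuleHandleA("ntdll.dll"), name);
    if (!stub || stub[0] != 0x4C || stub[1] != 0x8B ||
        stub[2] != 0xD1 || stub[3] != 0xB8)
        return (DWORD)-1;            /* unexpected bytes, e.g. a hook */
    return *(DWORD *)(stub + 4);     /* the immediate operand is the SSN */
}

int main(void)
{
    printf("NtOpenFile SSN on this build: 0x%lX\n",
           (unsigned long)GetSyscallNumber("NtOpenFile"));
    return 0;
}

Malware that resolves the number this way survives Microsoft's renumbering, which is why hardcoded SSNs breaking between builds is not much of a defence on its own.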

Being able to monitor a direct call to an identifier does not by itself warrant a detection (unless any call to that identifier is treated as malicious, which would be wildly unreliable). Like with all other behavioural detection mechanisms, the quality of the classification process (profiles, AI and so on) determines how well the syscall detection performs.
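A toy sketch of that classification point, with entirely invented event names, weights and threshold (no vendor's actual model): a single monitored direct syscall is just one signal, and it is the correlated chain that crosses the line.

#include <stdio.h>

/* Toy illustration only: invented signals, weights and threshold.
 * One direct-syscall event alone stays under the threshold; the
 * correlated sequence is what gets flagged. */
typedef struct { const char *event; int weight; } Signal;

int main(void)
{
    Signal trace[] = {
        { "direct syscall to NtAllocateVirtualMemory", 30 },
        { "RWX memory region allocated",               25 },
        { "remote thread created in another process",  35 },
    };
    int score = 0, threshold = 80;
    for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++) {
        score += trace[i].weight;
        printf("%-45s -> score %d\n", trace[i].event, score);
    }
    printf(score >= threshold ? "verdict: malicious (%d >= %d)\n"
                              : "verdict: benign (%d < %d)\n",
           score, threshold);
    return 0;
}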

In essence, this is one more data point/sensor that lets the product see individual actions sooner (as opposed to looking at the outcome, which could be too late), but it is not a miracle that will solve all evasions. It is the same whack-a-mole as everything else, and it does not affect features like LiveGuard (or other emulators) that focus on analysing the final outcome of running the code before the user runs it.

The author's post assumes that all pre-execution protections (including emulation, reputation and others) have already failed, and focuses directly on behavioural monitoring.

I was reserving this for another thread, but here may be a better place.

I’ve discussed pre-execution vs post-execution defences on another thread.
 
If you're paranoid
I am; using MS app control
 
I do not think that the guy mentioned in the OP can be right when he tests AVs only by "using various direct syscall techniques" via POCs.
AV detection is complex nowadays. Most AVs use machine learning that often works like a black box, correlating many things in ways that are cryptic to human analysts. For example, in many cases AVs can block or prevent direct syscall techniques in the wild because they correlate them with other factors typical of in-the-wild malware (including the full infection chain), which are usually absent in POCs.
 
I do not think that the guy mentioned in the OP can be right when he tests AVs only by "using various direct syscall techniques" via POCs.
AV detection is complex nowadays. Most AVs use machine learning that often works like a black box, correlating many things in ways that are cryptic to human analysts. For example, in many cases AVs can block or prevent direct syscall techniques in the wild because they correlate them with other factors typical of in-the-wild malware (including the full infection chain), which are usually absent in POCs.
You've perfectly articulated why context is so important in security testing. I've seen a parallel argument made in a different context that leads to the exact same conclusion. 😉

On one hand, we have the scenario I described: detonating malware directly on a desktop. This fails as a realistic test because it lacks the entire delivery and network context.

On the other hand, there is the flawed methodology of testing with minimal POCs. This method also fails because it lacks the behavioral context of a complete malware sample.

While these are two distinct testing methods, they are both flawed for the same core reason: they test an action in isolation. Whether you remove the beginning of the attack chain or the complexity of the payload itself, the result is an inaccurate test that can't properly measure a modern security product's intelligence.
 
On one hand, we have the scenario I described: detonating malware directly on a desktop. This fails as a realistic test because it lacks the entire delivery and network context.

On the other hand, there is the flawed methodology of testing with minimal POCs. This method also fails because it lacks the behavioral context of a complete malware sample.

Such "tests" can sometimes be useful to AV vendors, who can decide whether the POC (exploit or malware) is significant, i.e. cannot be blocked or prevented by other protection layers.
 
I do not think he was referring to Kaspersky's application control; he may be saying that Kaspersky adopts a less restrictive signature policy (manifested by allowing more cracks to run) while relying on behavioral analysis to detect the threat based on how it behaves when executed.
Then it's poorly worded. Saying Kaspersky's "design philosophy" is reactive is erroneous. A security product whose first line of defense is to deny execution of unknown software, or software signed by unknown vendors, is not "reactive"; it's rather restrictive. Those who have never tried or heard of Kaspersky will logically think that this "reactive" approach is prone to fail. While it's normal for Kaspersky to let many cracks run (detecting them as not-a-virus), these will 100% get blocked in the first instance by Application Control (AC).