App Review AV test #5 - underrated Emsisoft vs recent malware

It is advised to take all reviews with a grain of salt. In extreme cases some reviews use dramatization for entertainment purposes.
Content created by
rifteyy
Protection-wise, Emsisoft did not perform well according to @Shadowra's test.
The user is executing malware in a way that bypasses the crucial, multi-layered security defenses designed to stop threats at their earliest stages. This method is not representative of a real-world infection and provides a misleading picture of a security product's effectiveness.
 
The user is executing malware in a way that bypasses the crucial, multi-layered security defenses designed to stop threats at their earliest stages. This method is not representative of a real-world infection and provides a misleading picture of a security product's effectiveness.
how?
 
It skips behavioral-analysis triggers. Many modern malware variants use execution triggers to avoid detection: they do not behave maliciously until they are executed under specific conditions, such as after an internet connection is established or after a certain amount of time has passed. By manually placing and executing the files, the test doesn't mimic the conditions the malware is waiting for, so the anti-malware's behavioral monitor is presented with a file that looks benign until it begins to act maliciously. The anti-malware might still stop it, but not at the initial stage, which a real-world test would capture.
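
To make the trigger idea concrete, here is a minimal, harmless sketch of the gating pattern described above. The delay, the connectivity probe, and the placeholder payload (just a print statement) are invented for the illustration and are not taken from any real sample.

Python:
# Harmless illustration of an execution trigger: the "payload" below is just a
# print statement standing in for malicious behavior. Real samples gate on
# similar conditions so that nothing suspicious happens at first launch.
import socket
import time


def internet_available(host="8.8.8.8", port=53, timeout=3):
    """Crude connectivity probe: try opening a TCP socket to a public DNS server."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def payload():
    # Stand-in for the malicious stage; a behavioral monitor only sees
    # suspicious activity once execution reaches this point.
    print("Trigger conditions met - payload would run here.")


if __name__ == "__main__":
    time.sleep(60)            # lie dormant for a while before doing anything
    if internet_available():  # only proceed once a network path exists
        payload()
    else:
        print("No network available - stay benign and exit.")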
 
I have 64 GB of fast RAM, so for me RAM is there to be used; if you don't have much RAM, you do need less RAM-intensive programs, I suppose. IMHO Emsisoft has been quite open for years about its RAM use, and as I remember it was shown again a while back that ESET, for example, was using far more than it first appears. ESET is not alone in that, and RAM use is not the only issue, I feel.

Shadowra's recent test suggests that Emsisoft may be slipping somewhat, but an AV does not go from being OK to dead overnight; a multi-AV test here a few months ago showed Emsisoft to be pretty good.
 
I have 64 GB of fast RAM, so for me RAM is there to be used; if you don't have much RAM, you do need less RAM-intensive programs, I suppose. IMHO Emsisoft has been quite open for years about its RAM use, and as I remember it was shown again a while back that ESET, for example, was using far more than it first appears. ESET is not alone in that, and RAM use is not the only issue, I feel.
Indeed, RAM usage is not the first variable to consider when choosing a suitable AV, but it is one of the variables and cannot be disregarded.
And in situations where RAM is limited and a specific AV would leave my machine crawling, I will sacrifice 1% of detection rate for a functioning machine.
 
It skips behavioral-analysis triggers. Many modern malware variants use execution triggers to avoid detection: they do not behave maliciously until they are executed under specific conditions, such as after an internet connection is established or after a certain amount of time has passed. By manually placing and executing the files, the test doesn't mimic the conditions the malware is waiting for, so the anti-malware's behavioral monitor is presented with a file that looks benign until it begins to act maliciously. The anti-malware might still stop it, but not at the initial stage, which a real-world test would capture.
Manual placement and execution of files still simulate a realistic attack vector—many malware samples are indeed delivered as standalone executables or via droppers that behave immediately upon execution. Furthermore, security solutions are designed to protect users not only at the exact moment of execution but also when files are written to disk, scanned on access, or during heuristic and reputation checks.

In fact, waiting for special conditions to activate (like network availability) is an additional layer, not the baseline. A competent anti-malware solution should still flag the file at static analysis or sandboxing stages before the trigger conditions are met. Therefore, manual execution remains a valid way to test whether the security product can identify and stop malware—especially at early stages—rather than relying solely on delayed behavior that may or may not be triggered in a given test environment.
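
To illustrate the point about protection before execution, here is a toy sketch of a static/reputation-style check: hash a file as it sits on disk and look the digest up in a blocklist. The placeholder hashes are invented for the example; real products layer signatures, heuristics, ML models, and cloud lookups on top of this basic principle.

Python:
# Toy "static/reputation" check: hash a file on disk and look it up in a
# blocklist. The point is that the verdict is formed before the file ever runs,
# so no trigger condition can hide it.
import hashlib
import sys

# Placeholder blocklist of "known bad" SHA-256 digests (not real indicators).
KNOWN_BAD_SHA256 = {
    "0" * 64,
    "1" * 64,
}


def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    # Scan the file given on the command line, or this script itself as a demo.
    target = sys.argv[1] if len(sys.argv) > 1 else __file__
    fingerprint = sha256_of(target)
    verdict = "BLOCK" if fingerprint in KNOWN_BAD_SHA256 else "allow"
    print(f"{target}: {fingerprint} -> {verdict}")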
 
Manual placement and execution of files still simulate a realistic attack vector—many malware samples are indeed delivered as standalone executables or via droppers that behave immediately upon execution. Furthermore, security solutions are designed to protect users not only at the exact moment of execution but also when files are written to disk, scanned on access, or during heuristic and reputation checks.

In fact, waiting for special conditions to activate (like network availability) is an additional layer, not the baseline. A competent anti-malware solution should still flag the file at static analysis or sandboxing stages before the trigger conditions are met. Therefore, manual execution remains a valid way to test whether the security product can identify and stop malware—especially at early stages—rather than relying solely on delayed behavior that may or may not be triggered in a given test environment.
You listed several sophisticated defenses.

Static analysis
Sandboxing
Heuristic and reputation checks

These are all excellent and necessary security layers. However, modern malware is designed specifically to evade these checks. This is where the "true route of infection" and its specific triggers come into play.

The "Cat-and-Mouse" Game.

Why Triggers Matter

Malware authors know that security software is using these defenses. So, they design their malware to be dormant or benign-looking until it reaches the final stage of the attack chain. Here are a few common evasion techniques that are specifically designed to be triggered only during a real-world infection:

Droppers

A very common type of malware is a dropper. It's a small, seemingly harmless program that does little on its own. Its only job is to get past the initial static and reputation checks; then, once it's on the user's system, it downloads the final, malicious payload from a remote server. A test that places a zip file on the desktop and runs it bypasses the dropper's primary function and its associated network traffic entirely, and that traffic would be a huge red flag for a good security suite.
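
To show why that network traffic is such a useful signal, here is a deliberately crude heuristic, nothing like any vendor's actual engine, that flags processes which both run from a user or temp directory and hold established outbound connections, roughly the footprint a dropper's download stage leaves. It assumes the third-party psutil package is installed.

Python:
# Deliberately crude heuristic, not any vendor's real logic: flag processes
# that (a) run from a user or temp directory and (b) hold established outbound
# connections - roughly the footprint a dropper's download stage leaves.
import os
import tempfile

import psutil  # third-party: pip install psutil

SUSPECT_DIRS = tuple(
    d for d in (os.path.expanduser("~"), tempfile.gettempdir()) if d
)


def suspicious_exe(proc):
    """Return the executable path if the process matches the toy heuristic, else None."""
    try:
        exe = proc.exe()
        conns = proc.connections(kind="inet")
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        return None
    from_user_dir = exe.startswith(SUSPECT_DIRS)
    talking_out = any(
        c.status == psutil.CONN_ESTABLISHED and c.raddr for c in conns
    )
    return exe if (from_user_dir and talking_out) else None


if __name__ == "__main__":
    for proc in psutil.process_iter(["pid", "name"]):
        exe = suspicious_exe(proc)
        if exe:
            print(f"[?] pid={proc.info['pid']} name={proc.info['name']} exe={exe}")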

Anti-Sandbox Techniques

Malware is often designed to detect if it's running in a virtual machine or a sandbox. It might check for specific hardware characteristics, look for a lack of user activity, or check for specific filenames. If it detects a testing environment, it will not perform its malicious actions, making the security product look ineffective.
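
For illustration only, here is a harmless sketch of the environment checks described above. Small core counts, low RAM, short uptime, and well-known guest-additions driver files are commonly cited VM giveaways; the thresholds and paths below are examples chosen for the demo, not a definitive list, and the script only reports what it finds.

Python:
# Harmless sketch of common sandbox/VM checks. It only reports what it finds;
# real malware would use checks like these to stay dormant inside analysis VMs.
import os
import time

import psutil  # third-party: pip install psutil

# Commonly cited guest-additions driver files (example Windows paths).
VM_ARTIFACTS = [
    r"C:\Windows\System32\drivers\VBoxGuest.sys",  # VirtualBox Guest Additions
    r"C:\Windows\System32\drivers\vmhgfs.sys",     # VMware Tools
]


def sandbox_indicators():
    hits = []
    if (os.cpu_count() or 0) <= 2:
        hits.append("2 or fewer CPU cores")
    if psutil.virtual_memory().total < 4 * 1024 ** 3:
        hits.append("less than 4 GB of RAM")
    if time.time() - psutil.boot_time() < 10 * 60:
        hits.append("system booted less than 10 minutes ago")
    hits.extend(f"VM artifact present: {p}" for p in VM_ARTIFACTS if os.path.exists(p))
    return hits


if __name__ == "__main__":
    found = sandbox_indicators()
    if found:
        print("Environment looks like an analysis box:", "; ".join(found))
        print("A real sample would go quiet here; this demo just reports it.")
    else:
        print("No obvious sandbox indicators found.")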

Conditional Execution

Some malware will only execute its malicious code under specific conditions. For example, it might do any of the following (a benign sketch of these gates follows the list):

Wait until a specific time of day or a specific date.

Only execute if a user is logged in for a certain period of time (to evade rapid, automated tests).

Check for a specific network address to ensure it's on a legitimate network and not a lab's test network.

Only activate after a connection to a Command-and-Control (C2) server is established.
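
Here is the promised benign sketch of those gates. Each check simply returns True or False and the activation step is a print statement; the date window, the minimum uptime, and the expected network prefix are all invented values.

Python:
# Benign sketch of the conditional-execution gates listed above. Every value
# here is invented, and the "activation" is just a print statement.
import datetime
import socket
import time

import psutil  # third-party: pip install psutil (used only for boot_time)


def within_date_window():
    # e.g. only act during a specific month of the year
    return datetime.date.today().month == 12


def uptime_long_enough(minutes=30):
    # proxy for "a user has been logged on for a while" (defeats fast automated runs)
    return (time.time() - psutil.boot_time()) > minutes * 60


def on_expected_network(prefix="192.168.1."):
    # crude check of the local address used to reach the outside world
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.connect(("8.8.8.8", 80))  # no data is sent; this just picks a route
            return sock.getsockname()[0].startswith(prefix)
    except OSError:
        return False


if __name__ == "__main__":
    if within_date_window() and uptime_long_enough() and on_expected_network():
        print("All gates passed - a real sample would only activate now.")
    else:
        print("Gates not satisfied - nothing happens, which is exactly the point.")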

A test that simply runs an executable from a local folder on a desktop bypasses all of these crucial, real-world evasion techniques.

The "true route of infection" is crucial because it initiates the specific triggers and behaviors that malware uses to evade the very defenses the you mentioned. It tests the security suite's ability to stop a threat at every stage of the "kill chain," from the initial contact to the final execution.
 
While triggers like droppers, anti-sandbox checks, and conditional execution are indeed part of the malware “cat-and-mouse” game, the argument overstates their uniqueness. Security products are not evaluated solely by running a file from the desktop—vendors test against full infection chains, including droppers and C2 traffic, in controlled environments. Moreover, strong protection should detect malicious artifacts before trigger conditions (via signatures, heuristics, ML models, and cloud lookups).


If malware only “looks bad” after a contrived trigger, that doesn’t mean manual execution testing is invalid—it simply shows the product is doing its job at later stages. In real-world protection, stopping malware at any stage of the chain is success, not failure.
 
While triggers like droppers, anti-sandbox checks, and conditional execution are indeed part of the malware “cat-and-mouse” game, the argument overstates their uniqueness. Security products are not evaluated solely by running a file from the desktop—vendors test against full infection chains, including droppers and C2 traffic, in controlled environments. Moreover, strong protection should detect malicious artifacts before trigger conditions (via signatures, heuristics, ML models, and cloud lookups).


If malware only “looks bad” after a contrived trigger, that doesn’t mean manual execution testing is invalid—it simply shows the product is doing its job at later stages. In real-world protection, stopping malware at any stage of the chain is success, not failure.
A manual execution test is not completely invalid. In reality, it's just an incomplete picture of a product's full capabilities. You are correct that stopping malware at any stage is a success.

However, a true route of infection test is still the superior methodology because it simulates the full chain of events a threat uses to compromise a system.

Malware authors are constantly developing new ways to evade these tools. They use conditional execution to avoid showing their true nature to a security scanner. A real-world test forces the malware to follow its intended path, triggering its full malicious behavior and forcing the security product to respond.
 
Would you kindly provide the URLs of websites offering this kind of competent test, as some guidance for MT members?
I’ve already promised to create a thread, but I am busy with another project called “Analyse It!”.
Thread: Introducing Analyse It!

I can only do so much hahah…
 
I’ve already promised to create a thread, but I am busy with another project called “Analyse It!”.
Thread: Introducing Analyse It!

I can only do so much hahah…
Yes, I have noticed the thread, but it is a technical matter for testers.
I am waiting to read threads that use the tool to evaluate different security solutions comparatively.
 
Would you kindly provide the URLs of websites offering this kind of competent test, as some guidance for MT members?
AV-Comparatives
Website: https://www.av-comparatives.org

Direct Link to Real-World Protection Test Methodology: Real-World Protection Test Methodology

AV-TEST
Website: AV-TEST | Antivirus & Security Software & AntiMalware Reviews

Direct Link to Test Procedures: Antivirus Testing Procedures | AVTest Institute

AVLab Cybersecurity Foundation
Website: AVLab Cybersecurity Foundation

Direct Link to Test Methodology: Methodology » AVLab Cybersecurity Foundation
 
Yes, I have noticed the thread, but it is a technical matter for testers.
I am waiting to read threads that use the tool to evaluate different security solutions comparatively.
I am working to make the tool valuable to everyone, not just testers. It's just that I am reusing the components built for testers so I don't have to rewrite everything from scratch. But there is a lot more to come.
 

Only AVLab is reliable, along with my tests, the MalwareTips tests, and those of @Trident; the rest is in the garbage can ;)
 
Among the three websites, I find only AV-Comparatives to be the nearest to a real-world scenario, in addition to @Shadowra's tests; AV-TEST is just acceptable; the AVLab tests are not to be considered.
Why not to be considered?
[Attached: three screenshots of AVLab's methodology pages from avlab.pl]

 
Only AVLab is reliable, along with my tests, the MalwareTips tests, and those of @Trident; the rest is in the garbage can ;)
I understand your preference for AVLab's testing. They are indeed an excellent lab, and their focus on 'Living off the Land Binaries' (LOLBins) and live threats makes their tests very insightful. I completely agree that they are a reliable source.
However, the major testing labs that you've dismissed, AV-Comparatives and AV-TEST, are considered the industry standard for a reason, and it comes down to their methodology.

While a manual test of a zip file on a desktop can be a good starting point, it's not a complete picture. These professional labs don't just test one stage of an attack. They use a comprehensive approach that simulates the entire infection process, from a user clicking a malicious link to the final execution of a threat.

These labs use very large, continuously updated databases of both clean and malicious files to ensure their tests are not only thorough but also accurate. They also adhere to strict standards from organizations like the Anti-Malware Testing Standards Organization (AMTSO) to ensure transparency and objectivity.

While a test done by an individual or a small group of enthusiasts has its place, it simply cannot replicate the scale, rigor, and controlled environments of these professional labs. Dismissing them as 'in the garbage can' overlooks their critical role in ensuring that vendors are held to the highest standards of protection against the full spectrum of real-world threats.

Incomplete tests are not just inaccurate; they are misleading, and that is a very important distinction to make.

A misleading test can be more harmful than no test at all, because it can give users a false sense of security or cause them to make a poor decision.
 
Protection-wise, Emsisoft did not perform well according to @Shadowra's test.
That is just a single data point. There are plenty of certified lab tests in which Emsisoft performed consistently well, and even with those well-designed test scenarios at certified labs, the "top" performers have their ups and downs over time.