App Review: Enterprise Antivirus Comparison: Cylance - CrowdStrike - Cynet - DeepInstinct

Content created by Shadowra
Today, we are going to compare four antivirus programs for businesses: CrowndStrike, Cylance, Cynet, and DeepInstinct.
All four antivirus programs were configured identically for this test using script policies (Malware Lab policy).
The test is also the same: an introductory URL test followed by a malware pack containing a mix of Trojans, infostealers, JS/VBS/HTA/JAVA attacks, some older malware, and various other types of malware.
Let's see how our four challengers defend us.

Please note: Due to a connection issue, I was unable to connect to and show the Cylance panel :/



CrowndStrike: URL 8/9
6 of the 79 malware samples remain; good blocking overall, despite one (PowerShell) script that runs but appears inactive.
NPE detects 2 threats (remnants) and KVRT detects 2 as well.

Cynet: Cynet blocks 7 out of 9 links. I can't count the hits on the pack because Cynet renames the files...
During execution there are many blocks, but the VM ends up infected; a GDI malware even manages to paralyze the machine.
NPE detects 21 files and KVRT detects 6.

Cylance: It blocks 8 out of 10 links (during the other tests, one link had gone dead...). 2 pieces of malware got through.
Despite my request for it to scan the pack, it doesn't do so!
So I run the pack myself, note a few crashes, and the VM gets infected at lightning speed, with patched processes, etc.
NPE detects 6, KVRT detects 11, and the memory is infected.

DeepInstinct: Blocks 8 out of 9 links.
There are 19 files left out of 79; lots of blocks, but one stubborn script at the end.
NPE detects 3 and KVRT detects 2.

Tested at @kamiloxf's request.

=> This is the last video of 2025 (I even went all out on the thumbnail to celebrate ^^), and I'm going to take two weeks off.
See you in mid-January for new videos with a huge change ;)
 
Thanks for posting the run and the detailed breakdown. A few notes that may help readers interpret the results (and help you tighten future comparisons).

Quick corrections / clarity
  • It’s CrowdStrike (not “CrowndStrike”).
  • Cylance: since you couldn’t connect/show the panel, the test may not reflect a realistic business deployment (missing policy confirmation, cloud lookups/telemetry, containment actions, device isolation, etc.).

Methodology caveats (important for business products)
Endpoint security suites like these often behave very differently depending on management connectivity and response features (EDR actions), not just “static blocking.”
  • Panel connectivity / cloud services: If an agent can’t fully talk to its console, you may be testing a degraded mode (or at least you can’t prove policy actually applied).
  • URL test vs pack execution: URL filtering and endpoint prevention are separate layers; a “URL score” shouldn’t be treated as a direct predictor of runtime protection.
  • Pack composition & realism: A mixed pack is useful, but results can swing heavily based on a few samples (especially scripts/LOLbins, infostealers, or “GDI locker” style malware).

Tracking samples & results more rigorously
Your Cynet note (“renames the files… I can’t count them”) is exactly where comparisons get messy. A few ways to make it deterministic (a short code sketch follows this list):
  • Create a manifest before testing: SHA-256 + original filename for all 79 samples.
  • After the run, enumerate by hash (not filename) to count what remains/changed/quarantined.
  • Log timestamps: first detection time, prevented at-write vs at-execution, and any rollback/kill-chain actions.
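
For illustration, here is a minimal Python sketch of that manifest/diff workflow. The directory layout, CSV format, and script name are my assumptions, not anything from your video:

```python
# manifest.py - minimal sketch: track a sample pack by SHA-256, not filename.
import csv
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash in chunks so large samples aren't loaded into memory at once."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(pack_dir: Path, manifest_csv: Path) -> None:
    """Before the run: record SHA-256 + original filename for every sample."""
    with manifest_csv.open("w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["sha256", "filename"])
        for sample in sorted(pack_dir.iterdir()):
            if sample.is_file():
                writer.writerow([sha256_of(sample), sample.name])

def diff_after_run(pack_dir: Path, manifest_csv: Path) -> None:
    """After the run: count survivors by hash, so renamed files still match."""
    with manifest_csv.open() as f:
        expected = {row["sha256"]: row["filename"] for row in csv.DictReader(f)}
    remaining = {sha256_of(p) for p in pack_dir.iterdir() if p.is_file()}
    survivors = expected.keys() & remaining
    print(f"{len(survivors)} of {len(expected)} original samples remain:")
    for digest in sorted(survivors):
        print(f"  {digest}  (was: {expected[digest]})")

if __name__ == "__main__":
    # Usage: python manifest.py build|diff <pack_dir> <manifest.csv>
    mode, pack, out = sys.argv[1], Path(sys.argv[2]), Path(sys.argv[3])
    build_manifest(pack, out) if mode == "build" else diff_after_run(pack, out)
```

Run the `build` mode before detonation and the `diff` mode afterwards; because matching is done by hash, a product that renames files (your Cynet case) no longer breaks the count.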

About NPE / KVRT “remnants” results
Second-opinion tools are useful, but they don’t automatically mean “the product failed” (and they can flag inert remnants or potentially unwanted items).
  • If you can, distinguish: active compromise indicators (persistence, injected modules, patched processes, outbound traffic, new services/tasks) vs leftover files.
  • For “memory is infected” cases, it’s helpful to show at least one concrete indicator (e.g., suspicious module in a process, persistence point, or confirmed malicious network callback), without implying conclusions when it’s ambiguous.

What your posted results suggest (for this specific run)
Based on your notes alone (and with the Cylance console limitation):
  • CrowdStrike and Deep Instinct look like the cleanest outcomes in your environment, with fewer leftovers reported by NPE/KVRT.
  • Cynet appears to have blocked a lot during execution but still allowed at least one serious system-impacting infection in one VM (your “paralyze me” note is significant).
  • Cylance appears to underperform here, but without verified console/policy state it’s hard to treat this as representative of a properly managed deployment.

Suggestion for the next comparisons
If you want the “business buyer” angle to be stronger, consider adding one short section per product showing:
  • Agent version + policy name + proof policy is applied
  • Whether cloud connectivity is confirmed
  • What response actions were taken (kill process, quarantine, rollback, isolate host, etc.)

Overall, this is a solid format for a community-facing “hands-on” stress test—just make it very explicit that these are scenario results under your lab conditions, especially when a console connection is missing, so readers don’t overgeneralize the outcome.
 

I don't really appreciate your response, dear Bot.
The testing methodology is the same as for all antivirus programs. The Cylance panel didn't work on the VM, but it did work on my PC (probably a VPN bug).
 
Understood — and thanks for clarifying. My intent wasn’t to dismiss your methodology, but to highlight how readers may interpret the Cylance segment given the console visibility gap.

On the methodology being identical
If the same lab policy + same URL test + same pack is used for all products, that’s a fair and consistent approach for a hands-on comparison. I accept that point.

What I was trying to convey is narrower: for some enterprise products, the ability to prove the policy state and cloud connectivity during the run matters to how convincing the result looks to viewers (even if your actual method is consistent across vendors).

About the Cylance panel not working on the VM
If the panel worked on your physical PC and not on the VM, then it’s reasonable to describe this as an environment/connectivity issue (your “VPN bug” explanation fits), rather than “Cylance doesn’t work.” That’s an important distinction, and you already noted it in your post.

A practical way to prevent pushback in the comments is to add 1–2 lines like:
  • Agent installed and running on the VM; console access issue was VM/VPN-related (console works on host PC).
  • Because the console couldn’t be shown, actions/alerts visible in the portal aren’t demonstrated in-video for this run.

How to make the Cylance portion harder to dispute (without changing your test)
Even if you can’t open the portal in the VM, you can still strengthen credibility by showing local, non-portal evidence on the endpoint (a small sketch follows this list):
  • A screenshot/clip of the Cylance agent status page (showing it’s active/connected if available).
  • Windows services/processes confirming the agent is running.
  • Local event logs / agent logs showing detections (if the product provides them locally).
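
If it helps, here's a tiny Python sketch for capturing that kind of local evidence in a timestamped, paste-able form. The service name below is a hypothetical placeholder; verify the actual service name your Cylance version registers before relying on it:

```python
# agent_check.py - minimal sketch: snapshot a Windows service's state with a
# UTC timestamp, as local evidence that the endpoint agent is running.
import subprocess
from datetime import datetime, timezone

SERVICE_NAME = "CylanceSvc"  # hypothetical placeholder; confirm on your endpoint

def capture_service_state(service: str) -> str:
    """Run `sc query` and return its raw output, stamped for the test log."""
    result = subprocess.run(
        ["sc", "query", service],
        capture_output=True, text=True, check=False,
    )
    stamp = datetime.now(timezone.utc).isoformat()
    return f"--- {stamp} ---\n{result.stdout or result.stderr}"

if __name__ == "__main__":
    print(capture_service_state(SERVICE_NAME))
```

Showing that output (or the equivalent screenshot) at the start of a segment pre-empts most "was the agent even running?" comments.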

What I’d change in my takeaway
Given your clarification, I’d frame it as: the Cylance test run result is still valid as executed, but the portal-side visibility is missing due to an environmental issue, so viewers should interpret it as “performance observed on-endpoint under those conditions,” not a full EDR-managed demonstration.

If you want, paste exactly what you want to write as a short disclaimer for the Cylance segment (1–3 sentences), and I’ll format it in clean XenForo BBCode so it reads neutral and doesn’t invite arguments.
 
I do not think they like the more accurate bot but prefer the hallucinating one 🤪

@Bot please list a proper testing method for business products, then list a proper testing method for home products, one that mimics real-world scenarios.
 
Ground rules (applies to both business and home testing)
  • Use controlled environments: isolated VMs, no access to personal accounts, no shared folders/clipboard, and ideally no route to your LAN. Snapshot before each run.
  • Prefer emulation over live malware when the goal is “real-world-like” behavior (MITRE-style technique simulation, benign droppers, script runners, credential access simulations, etc.).
  • If you do use live malware, keep it strictly offline/isolated, document chain-of-custody, and never run it on production devices.
  • Repeatability: same OS build, same patch level, same locale, same browser/extensions, same user privilege level, same network conditions, same product versions.
  • Log everything: endpoint event logs, product logs, timestamps, network captures (if safe), and “what happened” notes for each sample/step.

A proper testing method for BUSINESS endpoint products (enterprise EPP/EDR/XDR)
This aims to reflect how businesses actually deploy and rely on these tools (policy + telemetry + response actions), not only “did it block a file.”

1) Define the scope and success criteria
  • Are you testing prevention, detection, response/containment, or all three?
  • Define metrics: time-to-detect, time-to-contain, rollback success, persistence prevented, credential theft prevented, false positives, admin workload.

2) Build a realistic “gold image” endpoint
  • Windows 10/11 Enterprise (or your target OS), fully patched to a fixed date.
  • Business software: Office, PDF reader, browser(s), common runtimes (where appropriate), line-of-business app if you want realism.
  • A standard user profile plus (optionally) a separate local admin account to model real operations.

3) Deploy each product like an enterprise would
  • Install from console using normal onboarding (not “standalone default” unless that’s a real supported mode you want to evaluate).
  • Verify and prove policy applied: show policy name, sensor version, last check-in time, tamper protection state.
  • Confirm cloud connectivity and that the endpoint is visible/healthy in the portal.

4) Use scenario-based technique testing (recommended)
Instead of “79 random malware files,” model full attack chains. Example scenario buckets:
  • Initial access: malicious macro-like behavior, ISO/ZIP delivery, drive-by script, fake installer chain.
  • Execution & LOLBins: PowerShell, mshta, wscript/cscript, rundll32/regsvr32, scheduled tasks.
  • Credential access: simulated LSASS access attempts, browser credential access patterns (in an emulated/safe way).
  • Persistence: services, run keys, scheduled task persistence attempts.
  • Defense evasion: obfuscated scripts, suspicious parent/child process chains, signed binary proxy execution patterns.
  • Exfiltration-like behavior: simulated beaconing to a controlled test server (or safe “dummy” endpoints).
(You can implement these with reputable breach-and-attack simulation / ATT&CK emulation tooling rather than live malware; a harmless single-chain sketch follows.)
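
As one concrete and completely harmless example of the "suspicious parent/child + encoded command" pattern above, here's a Python sketch. This is my illustration only; it prints a marker string and nothing else, and it is no substitute for proper emulation tooling:

```python
# benign_chain.py - minimal sketch: python -> powershell.exe -EncodedCommand,
# a classic behavioral trigger, with a payload that only prints a marker.
import base64
import subprocess

# PowerShell's -EncodedCommand expects base64 of UTF-16LE text.
payload = "Write-Output 'AV-TEST-MARKER: benign encoded command ran'"
encoded = base64.b64encode(payload.encode("utf-16-le")).decode("ascii")

result = subprocess.run(
    ["powershell.exe", "-NoProfile", "-EncodedCommand", encoded],
    capture_output=True, text=True, check=False,
)
print("exit code:", result.returncode)
print(result.stdout.strip())
```

The interesting output isn't the script's own: it's whether the product logged, flagged, or blocked the encoded-command launch in its console.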

5) Test the “business value” features explicitly
  • Alert quality: does the console tell you what happened, on which host, and with which process tree?
  • Response actions: host isolation, kill process, quarantine, rollback/remediation, block hash/IOC, live response shell (if applicable).
  • Operational friction: how many clicks to contain, how noisy are alerts, what tuning is required?

6) Add “real-world constraints”
  • Occasional packet loss / VPN-like latency (documented), roaming endpoints, endpoints off-network.
  • Test both online and degraded offline mode (clearly labeled).

7) Repeat and report statistically
  • Run each scenario multiple times (or across multiple fresh VMs).
  • Report outcomes by category: prevented, detected-but-allowed, contained after execution, missed, false positive.

A proper testing method for HOME consumer products (AV suites for real-world home use)
This aims to mimic typical home exposure: browsing, email, downloads, scams, and user mistakes—while staying safe and repeatable.

1) Build a typical home baseline
  • Windows 11/10 Home/Pro, fully patched to a fixed date.
  • Browser with common settings, a normal user account, and realistic apps (Discord/Steam/Zoom/Office viewer, etc.).
  • Default Windows protections left on unless the product explicitly replaces them (document changes).

2) Include “everyday” threat vectors (safe-first)
  • Web protection: test with reputable harmless test pages/files (e.g., an EICAR-style test file; see the sketch after this list) and known phishing simulation resources.
  • Download reputation: new/unknown installers, potentially unwanted software installers (only if you can validate legality and safety).
  • Email-like flows: attach a ZIP containing benign test artifacts; test whether it warns on risky file types and macro-like behavior.
  • Script abuse: harmless scripts that replicate suspicious behaviors (creating autoruns, launching LOLBins with suspicious flags) without deploying malware.
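
For the first bullet, here is a minimal Python sketch of an EICAR on-access check. The target path and the 10-second wait are my arbitrary choices; the EICAR string itself is the standard 68-byte test string and must be copied exactly or products won't recognize it:

```python
# eicar_check.py - minimal sketch: write the standard EICAR test file and see
# whether real-time protection blocks the write or removes the file.
import time
from pathlib import Path

# The standard EICAR test string (harmless by design, detected by convention).
EICAR = (
    r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
    r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)

target = Path("C:/avtest/eicar.com")  # arbitrary test location
target.parent.mkdir(parents=True, exist_ok=True)
try:
    target.write_bytes(EICAR.encode("ascii"))
except PermissionError:
    # Some products block the write itself - a pass for on-write protection.
    print("write blocked: on-write protection intervened")
else:
    time.sleep(10)  # give real-time protection a moment to react
    if target.exists():
        print("file still present: no on-access reaction within 10s")
    else:
        print("file removed/quarantined: on-access protection reacted")
```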

3) User-driven “mistake” scenarios
  • User clicks “Allow” in a UAC prompt.
  • User disables browser warnings.
  • User runs a “cracked” style installer simulation (benign but behaving similarly: drops files, modifies startup, bundles offers).

4) Measure what home users actually feel
  • Protection outcomes: blocked vs allowed, and whether it explains the risk clearly.
  • Remediation: can it clean up and restore settings?
  • False positives: does it break normal installers or games?
  • Performance and usability: browsing impact, install friction, nagging/upsell behavior (document objectively).

5) Keep it repeatable
  • Fresh snapshot per product.
  • Same browser profile, same extensions, same network, same test set.
  • Document product version + update timestamp right before the run.

Practical reporting template (works for both; a structured-data sketch follows the list)
  • Environment: OS build, patch date, VM settings, network mode.
  • Product: version, policy/profile name, cloud connected Y/N, key toggles.
  • Test set: scenarios or sample categories (not just totals).
  • Outcomes: prevented / detected / contained / missed / FP.
  • Evidence: key logs/screenshots and timestamps.
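
To keep that consistent across runs, the same template can be captured as structured data. Here is a Python/JSON sketch in which every field name and value is illustrative:

```python
# report_entry.py - minimal sketch: one per-product report entry as JSON.
import json

entry = {
    "environment": {
        "os_build": "Windows 11 23H2 (example)",
        "patch_date": "2025-12-01",
        "vm": True,
        "network_mode": "NAT, isolated",
    },
    "product": {
        "name": "ExampleEDR",  # hypothetical placeholder
        "version": "1.2.3",
        "policy": "Malware Lab policy",
        "cloud_connected": True,
    },
    "test_set": {
        "urls": 9,
        "samples": 79,
        "categories": ["trojan", "infostealer", "script"],
    },
    "outcomes": {
        "prevented": 70,
        "detected_but_allowed": 3,
        "contained_after_execution": 2,
        "missed": 4,
        "false_positives": 0,
    },
    "evidence": ["console_screenshot.png", "agent_log_excerpt.txt"],
}

print(json.dumps(entry, indent=2))
```

One such file per product per run makes it trivial to diff results between runs or between products.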

Conclusion
For business products, a “proper” method is scenario-based testing that validates policy + telemetry + response (not only block rates). For home products, a “proper” method emphasizes common exposure routes and user-mistake scenarios with safe, repeatable simulations and clear usability/false-positive reporting.
 
Great test Shadowra! Please keep them coming next year, it's really good to see who is full of hot air and marketing and who isn't!

Of course I'll be there in 2026! :D
I'm just taking a little vacation and I'm going to change a few things on the topics :)

Here's a teaser for the next test: It's an AV bought by Gen Digital.
 
Thank you for the test, @Shadowra
Just ignore the Bots; don't waste your time.
@Shadowra should know by now how I am with testing; I did give him pointers to help him at one point. I was more focused on the irony, though: many complained about the bot's answers not being factual, and now that they are, people don't like it. I do appreciate being called a bot, though; lord knows I wouldn't know anything about testing. Of course, I've seen so much of it from you.
 
FWIW, I've been "testing" DeepInstinct every day for about 2 years. No malware, very light. It did block 2 or 3 apps, but I'm not sure I'd call the blocks false positives; depending on how you define "false positive," those non-malware files did (or do) run in a non-standard way, triggering DeepInstinct's instinct to block irregular actions. I'd rather have those blocks and understand them than not get them. Very specific exceptions corrected the blocks.
 

Thank you for the amazing work (as always).
 
I will do my best to be here in 2026 and would love to see what the tests bring, but as they say, tomorrow is a promise to no one.
 
Of the four, I put my faith in CrowdStrike.
They've got the manpower and experience of a powerhouse.
 

Attachment: Screenshot_20260114-125855.png