Serious Discussion F-Secure Antivirus – Solid Finnish Security or Overhyped and Overpriced?

F-Secure Antivirus – your take?

  • Running F-Secure Total – VPN + ID monitoring worth the price

  • F-Secure Internet Security – solid core without bloat

  • Tried it, switched to Bitdefender/Norton – better value/extras

  • Uninstalled – too pricey, false positives annoy

  • Great for mobile/EU privacy – my go-to

  • Only for business/home network – management shines

  • Never used – Defender + add-ons gang

  • What’s F-Secure? Finnish underdog?


I used F-Secure for a while early last year, as it's offered free by my ISP, before they ditched DeepGuard and integrated a lot of Avira's SDKs, and I liked it. The only issue I ran into, and ultimately the reason why I switched away, was that File Explorer would hang for quite a while on any operation, or at worst completely freeze.
 
@Xciting - I was an F-Prot for DOS & later Windows user: a great, light program with a really nice interface that I used for some time. Another brand swallowed up & now gone. At this point there really are few real choices, just clones of the same thing, often with continual upsell & features I don't want, and not just in AV but in most programs. But thank goodness for some open source that people put work into, often just because they want to. Though most things in life are going the same way: subs, subs, & more subs seem to be how it is.. (rant over, coffee not working yet) :):)
 
@Xciting - I was an F-Prot for DOS & later Windows user: a great, light program with a really nice interface that I used for some time. Another brand swallowed up & now gone. At this point there really are few real choices, just clones of the same thing, often with continual upsell & features I don't want, and not just in AV but in most programs. But thank goodness for some open source that people put work into, often just because they want to. Though most things in life are going the same way: subs, subs, & more subs seem to be how it is.. (rant over, coffee not working yet) :):)
Ikr, F-Prot used to be my fav to play around with a while ago. Everything's subscription-based now to milk cash out of forgetful customers. F-Prot became Cyren, I'm pretty sure, and then Cyren got bought out by Data443.
 
Ikr, F-Prot used to be my fav to play around with a while ago. Everything's subscription-based now to milk cash out of forgetful customers. F-Prot became Cyren, I'm pretty sure, and then Cyren got bought out by Data443.
Data443 uses the Avira antivirus engine. I don't know if F-Prot signatures are still valid.
 
I used F-secure for awhile earlier last year as its offered free by my ISP before they ditched Deepguard and integrated alot of Avira's SDK's and I liked it, the only issue I ran into and ultimately the reason why i switched off is File exlorer for any operation would at the very least hang for quite awhille, or at worse would completely freeze.
DeepGuard was a masterful F-SECURE marketing campaign.

There is no definitive proof anywhere that shows it to be superior to Avira's behavior blocker or any other security software publisher's behavior blocker.

Times are a-changin'. The old F-SECURE is not the new F-SECURE. The new F-SECURE is 100% focused on shareholders - as it should be for a public company that issued/issues shares.

A lot of what is happening within the consumer security products industry is largely the result of consumer behaviors - which is to say that most people do not want to pay for security software. If consumers don't want to pay, then they won't get what they want, expect, and demand. That is no different than any other product or service - from frying pans to trips to space.

The net result is corporations and companies making moves to get as much blood out of the stone as possible without consideration for the consumers - particularly those disposed to complain about security software products on forums.
 
DeepGuard was a masterful F-SECURE marketing campaign.

There is no definitive proof anywhere that shows it to be superior to Avira's behavior blocker or any other security software publisher's behavior blocker.
I don't know, Shadowra seems to keep up with these things in his reviews?

Malware Pack: 28 files remaining out of 247.
I have the feeling that the F-Secure era is behind us...
After DeepGuard, which was excellent, F-Secure arrives with Sentry, Avira's module, and suffers from the same problem as the latter: Sentry lags far behind on unknown attacks.
 
I don't know, Shadowra seems to keep up with these things in his reviews?
But as far as I remember, BullGuard did really well in tests. It was not the best, but it was very good.

Now, if we compare the performance of BullGuard Sentry and DeepGuard, I believe I'm not qualified to make that kind of comparison.


Btw, I really enjoyed using BullGuard back in the day. It was a pleasant experience. I remember purchasing 7 keys (1 year/3 devices) from eBay for less than $25. Those were keycards, and the seller was nice enough to scratch them off for me instead of mailing them.
 
I don't know, Shadowra seems to keep up with these things in his reviews?
All behavioural blockers have areas of expertise and areas where they lack; the result depends on what exactly you test.

F-Secure DeepGuard did not really shine with anything in particular, it was ok but nothing special.
BullGuard Sentry did have acceptable performance; sometimes, on fileless malware (convoluted chains with loads of calls from one script to another, injection and so on), it would beat Norton SONAR, TM Aegis and the like.

When it comes to executable malware, F-Secure’s aggressive reputation approach was more effective than the over-analysis approach in Sentry.

However, Sentry was acquired by Avira quite some time ago (around 3 years now) and is now integrated with all Avira intelligence.
Technically, Avira can pull an executable from your machine, run it through dynamic analysis, and extract a malicious domain; then, on my machine, another executable merely attempting to resolve that domain may be wiped by Sentry.
All these detections are possible and you’d be surprised how easy it is to implement.
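
For instance, here is a minimal toy sketch in Python of that cloud-correlation idea (not Avira's actual pipeline; every name, domain, and function here is invented):

Code:
# Toy cloud-correlated detection: a domain extracted from dynamic
# analysis of one sample becomes an IOC that condemns any other
# process merely resolving it. All names are hypothetical.

CLOUD_IOC_DOMAINS: set[str] = set()

def dynamic_analysis(sample_path: str) -> set[str]:
    """Stand-in for a cloud sandbox run; returns contacted domains."""
    return {"evil-c2.example.com"}

def submit_sample(sample_path: str) -> None:
    # Machine 1 uploads a suspicious executable; its IOCs go global.
    CLOUD_IOC_DOMAINS.update(dynamic_analysis(sample_path))

def on_dns_query(process: str, domain: str) -> None:
    # Called by the local agent whenever any process resolves a name.
    if domain in CLOUD_IOC_DOMAINS:
        print(f"BLOCK: {process} resolved known-bad domain {domain}")

submit_sample("sample_a.exe")                        # seen on one machine
on_dns_query("sample_b.exe", "evil-c2.example.com")  # blocked on another

The hard part in production is scale and false-positive control, not the lookup itself, which is presumably what is meant by "easy to implement".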

Now how F-Secure runs Sentry is another question.

But any degradation in quality is unlikely to be the direct result of replacing DeepGuard with Sentry.
 
I don't know, Shadowra seems to keep up with these things in his reviews?
Shadowra is not a professional tester. In addition, his conclusion(s) are based on a limited data set. Those are just facts and no criticism of the person nor their effort.

The effort required to correctly test a behavior blocker or behavioral protection set requires way more than merely executing "fresh" malware samples and then ascribing the failures to a specific protection component or module. Again, just statements of fact.

Places such as AV-Comparatives, MRG Effitas, professional pentesters, etc., with far more resources and capabilities, all have difficulty testing behavioral protections in a manner that definitively proves a failure lies with a specific protection component or module.

Behavior blockers depend upon, but are not limited to:
  • API call sequences
  • Timing
  • Memory allocation patterns
  • Parent/child process chains
  • File system deltas
  • Registry mutation patterns
So any testing must identify and track all of these and more.
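
To make that concrete, here is a toy scorer in Python (all signal names, weights, and the threshold are invented) showing why a tester who tracks only one dimension cannot say which signal produced a block; the verdict emerges from several correlated signals:

Code:
# Toy behavior scorer: no single signal convicts on its own.
from dataclasses import dataclass, field

@dataclass
class ProcessEvents:
    api_sequence: list[str] = field(default_factory=list)
    parent_chain: list[str] = field(default_factory=list)
    files_touched: int = 0      # file system delta
    registry_writes: int = 0    # registry mutation pattern

def score(ev: ProcessEvents) -> int:
    s = 0
    # Suspicious API ordering: allocate -> write -> execute remote memory
    if ev.api_sequence[-3:] == ["VirtualAlloc", "WriteProcessMemory",
                                "CreateRemoteThread"]:
        s += 3
    if "winword.exe" in ev.parent_chain:  # Office app in the parent chain
        s += 2
    if ev.files_touched > 100:            # ransomware-like mass modification
        s += 2
    if ev.registry_writes > 10:
        s += 1
    return s

ev = ProcessEvents(
    api_sequence=["VirtualAlloc", "WriteProcessMemory", "CreateRemoteThread"],
    parent_chain=["explorer.exe", "winword.exe", "cmd.exe"],
    files_touched=250,
    registry_writes=12,
)
print("BLOCK" if score(ev) >= 5 else "ALLOW")  # weak signals combine to a block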

If testers could isolate BB behavior, malware authors could too. Therefore, vendors:
  • Don’t document BB triggers
  • Don’t expose BB logs
  • Don’t allow truly isolated BB‑only mode
  • Randomize or cloud‑assist decisions
Result: No transparent way to measure BB performance.

BB testing requires:
  • Instrumented malware
  • API‑level tracing
  • Controlled exploit chains
  • Repeatable behavioral triggers
  • Vendor‑neutral sandboxing
  • Deep forensic logging
Only a few research labs even have this capability — and they don’t publish consumer‑friendly results. Most of that testing is performed and the results provided on a fee-per-service basis.
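
As a trivial illustration of just the "API-level tracing" bullet: Python's own sys.settrace can record a call sequence inside a Python process. Real BB testing has to capture the equivalent at the native API layer (hooking, ETW, and the like), which is enormously harder; this toy only shows what "capturing the sequence" means:

Code:
import sys

call_log = []

def tracer(frame, event, arg):
    # Log every function entry: a crude stand-in for API-level tracing.
    if event == "call":
        call_log.append(frame.f_code.co_name)
    return tracer

def stage():    # pretend dropper stage
    payload()

def payload():  # pretend payload
    pass

sys.settrace(tracer)
stage()
sys.settrace(None)
print(call_log)  # ['stage', 'payload'] -- the sequence a tester must capture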

¯\_(ツ)_/¯

The reality of behavioral blockers in my experience is that they are unreliable when faced with malware generally - even with "old" malware. Some ancient malware can bypass all modern behavior blockers.

Some will definitely not like what I've stated, but it is 100% factual.
 
Shadowra is not a professional tester. In addition, his conclusion(s) are based on a limited data set. Those are just facts and no criticism of the person nor their effort.

The effort required to correctly test a behavior blocker or behavioral protection set requires way more than merely executing "fresh" malware samples and then ascribing the failures to a specific protection component or module. Again, just statements of fact.

Places such as AV-Comparatives, MRG Effitas, professional pentesters, etc., with far more resources and capabilities, all have difficulty testing behavioral protections in a manner that definitively proves a failure lies with a specific protection component or module.

Behavior blockers depend upon, but are not limited to:
  • API call sequences
  • Timing
  • Memory allocation patterns
  • Parent/child process chains
  • File system deltas
  • Registry mutation patterns
So any testing must identify and track all of these and more.

If testers could isolate BB behavior, malware authors could too. Therefore, vendors:
  • Don’t document BB triggers
  • Don’t expose BB logs
  • Don’t allow truly isolated BB‑only mode
  • Randomize or cloud‑assist decisions
Result: No transparent way to measure BB performance.

BB testing requires:
  • Instrumented malware
  • API‑level tracing
  • Controlled exploit chains
  • Repeatable behavioral triggers
  • Vendor‑neutral sandboxing
  • Deep forensic logging
Only a few research labs even have this capability — and they don’t publish consumer‑friendly results. Most of that testing is performed and the results provided on a fee-per-service basis.

¯\_(ツ)_/¯

The reality of behavioral blockers in my experience is that they are unreliable when faced with malware generally - even with "old" malware. Some ancient malware can bypass all modern behavior blockers.

Some will definitely not like what I've stated, but it is 100% factual.
You answered, or brought up, two things: one being that the testing here should be taken with a grain of salt (as is mentioned in the App Reviews) and compared with other sources, like AV-Comparatives, Adrian's testing, etc. I absolutely admire Shadowra's work and consider it a helpful resource, but it is limited (samples, phishing sites).

And you read my mind, as I was wondering how a BB could be tested on its own (just that detection "engine") to know how it really works, since it usually runs in the background or in tandem with the real-time scanning. I don't know enough about how that works and would have to do more research, but I appreciate the good points you bring up and the food for thought.
 
You answered, or brought up, two things: one being that the testing here should be taken with a grain of salt (as is mentioned in the App Reviews) and compared with other sources, like AV-Comparatives, Adrian's testing, etc. I absolutely admire Shadowra's work and consider it a helpful resource, but it is limited (samples, phishing sites).

And you read my mind, as I was wondering how a BB could be tested on its own (just that detection "engine") to know how it really works, since it usually runs in the background or in tandem with the real-time scanning. I don't know enough about how that works and would have to do more research, but I appreciate the good points you bring up and the food for thought.
How does one determine a behavior blocker's capabilities and effectiveness, at least in a way that is credible and as close to dependable or reliable as one can currently obtain?

You hire a small army of very expensive, highly experienced and skilled professional security software testers and pen testers. Whatever they provide you as a result will certainly have a large "Caveats" or "Limitations" section in the findings report(s).

In the absence of that, one uses what they can find and makes their own judgement and decisions based upon their understanding of the data or information. Either that or pentest security software behavior blockers yourself.
 
In the absence of that, one uses what they can find and makes their own judgement and decisions based upon their understanding of the data or information. Either that or pentest security software behavior blockers yourself.
Been there, done that, but there are a bunch of limitations to that as well, mainly that the detailed information on what leads to a block is typically available only in enterprise versions, and those are very difficult to obtain as a single license; often the requirement is 500 seats plus.
 
I don't know, Shadowra seems to keep up with these things in his reviews?
Shadowra attributed effectiveness against a set of malware and other threats to DeepGuard based upon the detections recorded as blocked or remediated by DeepGuard in the generic F-SECURE event log.

F-Secure DeepGuard did not really shine with anything in particular, it was ok but nothing special.
DeepGuard was mostly a marketing gimmick, hyped by the media. It did provide limited capabilities. Many believed the F-SECURE marketing and drank the Kool-Aid.
 
Been there, done that, but there are a bunch of limitations to that as well, mainly that the detailed information on what leads to a block is typically available only in enterprise versions, and those are very difficult to obtain as a single license; often the requirement is 500 seats plus.
To test the BB in the software is going to be very expensive. There is absolutely no doubt about it. It's easily a high-five-figure or six-figure project.

One can always larp as a corporate CEO and demand that security software vendors demonstrate their behavior blockers, but you and I both know the larping CEO would quickly find the publishers over-stating their products' capabilities.

Start asking knowledgeable pentester questions in any such demonstration and it would be "Vendor - Exit stage left...".
 
A valid antivirus test must respect the "Attack Lifecycle." In the real world, malware does not magically appear in a folder on your desktop; it must traverse the network, bypass browser filters, and survive the write to the disk. By skipping these steps (as Shadowra's method does), a tester ignores roughly 90% of the protection stack, specifically the layers designed to prevent the malware from ever reaching the "Execution" phase where the Behavior Blocker lives.

If one were to test only the Behavioral Blocker via a desktop pack, then they need to understand that disabling the Antivirus (Real-Time Protection) during the extraction phase destroys the validity of a "Behavioral Blocker" test. Modern behavioral engines (like F-Secure's DeepGuard or Avira's Sentry) do not operate in a vacuum; they rely on a continuous stream of context. By turning the AV off when the files hit the disk, you strip away the "File Write" event and "Origin Metadata," forcing the Behavior Blocker to assess the threat with partial blindness. This is why bazang labeled the data set "limited."

When Shadowra (or any tester) disables the AV to extract a zip, they break the Protection State Machine in three critical ways:

Loss of the "Write" Event (The I/O Gap)

  • Real World: A malware file is written to disk by a browser or script. The AV's file-system filter driver intercepts this write operation in real-time. It logs: "Process A (Browser) is writing File B (Malware) to Path C (Temp)."
  • The Flawed Test: If the AV is disabled during extraction, this event happens invisibly. When the AV is turned back on, the file is already "resident" on the disk. The AV has lost the history of how that file arrived.
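
A minimal sketch of that provenance gap (Python; the log structure and names are invented, and a real filter driver lives in the kernel, not in a dict):

Code:
# Toy "filter driver" log: a file's origin is only known if the write
# was observed. Files dropped while protection was off have no history.
write_log = {}       # path -> (writing process, origin)
av_enabled = True

def on_file_write(process: str, path: str, origin: str) -> None:
    if av_enabled:   # the filter driver only logs what it sees
        write_log[path] = (process, origin)

def assess(path: str) -> str:
    if path not in write_log:
        return "no provenance: written while protection was off"
    proc, origin = write_log[path]
    return f"written by {proc}, origin {origin}"

on_file_write("chrome.exe", r"C:\Temp\a.exe", "https://bad.example")
av_enabled = False   # tester disables the AV to extract the pack
on_file_write("7z.exe", r"C:\Temp\b.exe", "archive extraction")
av_enabled = True
print(assess(r"C:\Temp\a.exe"))  # full history available
print(assess(r"C:\Temp\b.exe"))  # the I/O gap in action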

The "Mark of the Web" (MOTW) Disconnect​

  • Real World: Files downloaded from the internet are tagged with an NTFS Alternate Data Stream (Zone.Identifier). Smart AVs use this tag to increase heuristic sensitivity.
  • The Flawed Test: Extracting a zip (especially with tools like 7-Zip or WinRAR while AV is off) often handles these tags inconsistently. If the AV doesn't witness the creation of the MOTW, it may treat the malware as a "Trusted Local File" rather than a "Risky Download," defaulting to a lower sensitivity to reduce false positives.
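
You can inspect the MOTW yourself; it is literally an NTFS alternate data stream named Zone.Identifier. A quick check (Windows/NTFS only; the sample path is hypothetical):

Code:
def motw_zone(path: str):
    """Return the ZoneId from a file's Mark-of-the-Web, or None if absent."""
    try:
        with open(path + ":Zone.Identifier") as ads:  # NTFS alternate stream
            for line in ads:
                if line.strip().startswith("ZoneId="):
                    return line.strip().split("=", 1)[1]
    except OSError:
        return None  # no stream: the file looks like a trusted local file
    return None

print(motw_zone(r"C:\Users\me\Downloads\sample.zip"))  # "3" = Internet zone

ZoneId=3 is the Internet zone. Archivers genuinely differ in whether they propagate this stream to extracted files, which is exactly the inconsistency described above.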

"Cold" vs. "Hot" Execution​

  • Hot Execution (Real): The file lands and executes immediately. The BB correlates the drop with the run.
  • Cold Execution (The Test): The file sits on the disk (while the tester scans/prepares). Later, it is executed. The temporal link between "Creation" and "Execution" is broken. As bazang noted in Post #30, "API call sequences" and "Timing" are critical. This test scrambles both.
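
The temporal link itself is trivial to express (toy Python; the 30-second window is an invented threshold, and a real engine treats this as one weighted signal among many, not a hard rule):

Code:
import os, time

CORRELATION_WINDOW = 30.0  # seconds; invented for illustration

def drop_run_correlated(path: str) -> bool:
    # getctime returns the creation time on Windows
    age = time.time() - os.path.getctime(path)
    return age < CORRELATION_WINDOW

# Hot: executed right after the browser wrote it -> True, drop+run correlate
# Cold: sat on disk while the tester scanned/prepared -> False, link broken
print(drop_run_correlated(r"C:\Temp\a.exe"))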

Conclusion

This testing method verifies Static On-Demand Detection (can the scanner identify these bytes?), but fails to test Dynamic Behavioral Prevention because the behavioral engine was lobotomized during the infection phase.

Correct Test Protocol

To test properly, the AV must remain Active during the download/extraction. If the AV deletes the file immediately, that is a successful behavioral/signature block, and it should be recorded as such. Disabling the protection to "let the malware run" creates a scenario that only exists if the user has already been compromised or has configured their system incorrectly.
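
As a sketch, that recording rule looks like this in harness form (Python; the URL handling and timings are placeholders, and it goes without saying this belongs only inside a disposable, isolated VM):

Code:
import os, subprocess, time, urllib.request

def test_sample(url: str, workdir: str) -> str:
    """Protection stays ON for the sample's entire lifecycle."""
    path = os.path.join(workdir, url.rsplit("/", 1)[-1])
    urllib.request.urlretrieve(url, path)  # download with the AV active
    time.sleep(10)                         # give on-access scanning time to act
    if not os.path.exists(path):
        # Deleted on write: a successful signature/reputation block.
        return "PASS: blocked before execution - record it as such"
    subprocess.run([path], timeout=120)    # only now is the behavioral layer tested
    return "executed: record the behavioral verdict from the AV log"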

By executing only what the static scan missed, the tester is inadvertently filtering the sample set to the most obscure or newest threats. While this stresses the Behavior Blocker (BB), it doesn't test the Integrated Stack. A real-world test should see if the BB catches common threats that might have barely slipped past a signature update, not just zero-day anomalies.

Which brings me to this point as well. Without the ability to isolate specific protection modules (signatures vs. cloud vs. behavior), it is impossible for a home user to definitively state that a specific behavioral engine (like F-Secure's DeepGuard) is superior or inferior to another. Most "blocks" in amateur tests are likely driven by cloud reputation or static signatures, not local behavioral analysis.

Bazang stated BBs are "unreliable". This is partially true for purely local engines. However, when augmented by cloud ML (Machine Learning), reliability increases significantly. The failure usually happens when the machine is offline or the attack vector mimics a legitimate admin tool (Living off the Land binaries). That is generally a problem, as most testers disable the Internet precisely when trying to isolate the BB module.
 
A valid antivirus test must respect the "Attack Lifecycle." In the real world, malware does not magically appear in a folder on your desktop; it must traverse the network, bypass browser filters, and survive the write to the disk. By skipping these steps (as Shadowra's method does), a tester ignores roughly 90% of the protection stack, specifically the layers designed to prevent the malware from ever reaching the "Execution" phase where the Behavior Blocker lives.

If one were to test only the Behavioral Blocker via a desktop pack, then they need to understand that disabling the Antivirus (Real-Time Protection) during the extraction phase destroys the validity of a "Behavioral Blocker" test. Modern behavioral engines (like F-Secure's DeepGuard or Avira's Sentry) do not operate in a vacuum; they rely on a continuous stream of context. By turning the AV off when the files hit the disk, you strip away the "File Write" event and "Origin Metadata," forcing the Behavior Blocker to assess the threat with partial blindness. This is why bazang labeled the data set "limited."

When Shadowra (or any tester) disables the AV to extract a zip, they break the Protection State Machine in three critical ways:

Loss of the "Write" Event (The I/O Gap)

  • Real World: A malware file is written to disk by a browser or script. The AV's file-system filter driver intercepts this write operation in real-time. It logs: "Process A (Browser) is writing File B (Malware) to Path C (Temp)."
  • The Flawed Test: If the AV is disabled during extraction, this event happens invisibly. When the AV is turned back on, the file is already "resident" on the disk. The AV has lost the history of how that file arrived.

The "Mark of the Web" (MOTW) Disconnect​

  • Real World: Files downloaded from the internet are tagged with an NTFS Alternate Data Stream (Zone.Identifier). Smart AVs use this tag to increase heuristic sensitivity.
  • The Flawed Test: Extracting a zip (especially with tools like 7-Zip or WinRAR while AV is off) often handles these tags inconsistently. If the AV doesn't witness the creation of the MOTW, it may treat the malware as a "Trusted Local File" rather than a "Risky Download," defaulting to a lower sensitivity to reduce false positives.

"Cold" vs. "Hot" Execution​

  • Hot Execution (Real): The file lands and executes immediately. The BB correlates the drop with the run.
  • Cold Execution (The Test): The file sits on the disk (while the tester scans/prepares). Later, it is executed. The temporal link between "Creation" and "Execution" is broken. As bazang noted in Post #30, "API call sequences" and "Timing" are critical. This test scrambles both.

Conclusion

This testing method verifies Static On-Demand Detection (can the scanner identify these bytes?), but fails to test Dynamic Behavioral Prevention because the behavioral engine was lobotomized during the infection phase.

Correct Test Protocol

To test properly, the AV must remain Active during the download/extraction. If the AV deletes the file immediately, that is a successful behavioral/signature block, and it should be recorded as such. Disabling the protection to "let the malware run" creates a scenario that only exists if the user has already been compromised or has configured their system incorrectly.

By executing only what the static scan missed, the tester is inadvertently filtering the sample set to the most obscure or newest threats. While this stresses the Behavior Blocker (BB), it doesn't test the Integrated Stack. A real-world test should see if the BB catches common threats that might have barely slipped past a signature update, not just zero-day anomalies.

Which brings me to this point as well. Without the ability to isolate specific protection modules (signatures vs. cloud vs. behavior), it is impossible for a home user to definitively state that a specific behavioral engine (like F-Secure's DeepGuard) is superior or inferior to another. Most "blocks" in amateur tests are likely driven by cloud reputation or static signatures, not local behavioral analysis.

Bazang stated BBs are "unreliable". This is partially true for purely local engines. However, when augmented by cloud ML (Machine Learning), reliability increases significantly. The failure usually happens when the machine is offline or the attack vector mimics a legitimate admin tool (Living off the Land binaries). That is generally a problem, as most testers disable the Internet precisely when trying to isolate the BB module.

Bookmarked. I agree: samples on the desktop bypass a main component and what I look for, web-page phishing and download protection. Is part of the issue that malware testers can't video themselves going to many different sites (or one site?) in real time to download and test 30-3,000 malware samples (if they're still valid), so it's easier, and makes for a shorter, more dramatic video, to unleash all the samples from a desktop folder and watch what the on-access/on-demand scanning does?

YouTube can be the worst for this type of dramatic presentation.
 
I have had & still have a soft spot for Emsisoft, the simplicity of it
I used Emsisoft for a year recently and liked it, but did not renew (I'm running Linux more). Not sure Emsisoft is dying; I read they now have a macOS beta, which could be viewed as an expansion, or as not enough Windows business. With MS Defender getting better, third-party AV is not a business I'd venture into.