I'm confused about how exactly CF works to protect the system and am looking for some clarification. I've watched CS's video on her setup and read numerous threads about it, but I'm still not quite understanding what makes it so great. I've seen it described as default-deny, but I'm not seeing that. AFAICT, the only protections it provides are cloud scanning and containment, the latter being what CS seems to praise it for and what seems to make it so special, unless I'm missing something. If I'm understanding it correctly, the benefit is that apps that aren't known to be malicious, and therefore wouldn't be blocked by a traditional AV, but are also unknown and therefore may be malicious, will be contained. If that's the case, I can definitely see the appeal, and it seems that would make it worth using even if just for that feature, with everything else, including the firewall, disabled in favor of better alternatives. But I want to be sure I'm understanding it correctly, as I don't want to use it if it's not actually providing any added protection.
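To check my understanding, here's the decision logic as I picture it, in quick Python terms (purely my own mental model, not Comodo's actual implementation; the verdict names are made up):

```python
# Toy model of the cloud-verdict + containment idea as I understand it
# (my own illustration, not Comodo's actual code or terminology).

def handle_file(verdict):
    """Decide what to do with a file based on its cloud/signature verdict."""
    if verdict == "trusted":      # known-good: whitelisted / trusted vendor
        return "allow"
    if verdict == "malicious":    # known-bad: signature or cloud blacklist hit
        return "quarantine"
    # The interesting case: unknown files are neither allowed nor blocked;
    # they run inside the container so they can't touch the real system.
    return "contain"
```

If that middle ground for unknowns is really how it works, that's the part a traditional AV doesn't give you.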
I plan to run VoodooShield and/or SecureAPlus + OSArmor + SysHardener and possibly a third-party AV (I'm leaning toward Emsisoft paid, Bitdefender free/paid, or possibly FortiClient free), since I absolutely don't want to use WD, though SAP might be enough, especially combined with CF. VS/SAP would provide a significant level of protection as anti-exes, not to mention VS's VoodooAI and cloud scanner and SAP's cloud scanner and local AV. OSArmor and SysHardener would add more layers, and the third-party AV would basically be an extra safety net in case the user simply clicks allow on something malicious, i.e., a layer against human error. And while there are a lot of conflicting opinions about WD, I personally don't think it's very good based on test results, and I've had mixed results with it myself. Testing with a malware file, sometimes it flags it and UAC refuses to run the file even after I add it as an exception; sometimes it flags it and the file runs fine after I allow it; and sometimes it doesn't block it at all. It's wildly inconsistent, ranging from WAY overly authoritative to a complete miss, both of which are unacceptable to me. Even so, I think it would probably be good enough as a backup layer, but between the crappy interface and usability, the fact that it starts scans while I'm doing things, dragging the system down, with no way to stop them, the inconsistency and UAC-related issues, and the poor performance measurements/ratings it's received in testing, which I'm starting to think cause some of the issues I experience, I intend to replace it.
Since this setup would protect against known threats, exploits, and stuff running that wasn't explicitly run by the user, that leaves only two concerns: zero-days, which CF would in theory protect against and which aren't a big concern for me even without it, and compromised certificates, which are the main threat I'm trying to find a way to protect against. Granted, such a situation is very rare, but it's also potentially devastating. I'm talking about incidents like what happened with CCleaner a while back, and in fact that file is what I'm primarily using for testing, though it's known malware now, so it's caught by signatures. But even if it weren't, I can't figure out any way to protect against it, since AV software would automatically trust it due to the certificate. I was hoping a BB (behavior blocker) would still recognize that it was acting maliciously and block it, but I can't test that since almost all AV software comes with signatures. EAM is the only one you can get without them, and when I tried it, it didn't block the file. That made sense, since there was no signature for it, but it was also disconcerting, because the certificate was explicitly revoked by its issuer, which should have been a red flag. Anyway, unless I'm missing something, it seems this is a situation that can't really be protected against.
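Just to make my complaint concrete, here's the trust logic I'd *want* for signed files, sketched in Python (entirely hypothetical; no AV that I know of exposes anything like this, and the function and flags are my invention):

```python
# Hypothetical trust decision for signed executables -- how I think
# revocation *should* be treated, not how any real AV actually works.

def trust_signed_file(signature_valid, cert_revoked, known_malware):
    if known_malware:
        return "block"             # a signature/cloud blacklist hit always wins
    if not signature_valid:
        return "treat_as_unknown"  # unsigned or broken signature: no trust bonus
    if cert_revoked:
        # A revoked certificate shouldn't just mean "no trust bonus" -- the
        # issuer has explicitly disowned it (as in the CCleaner incident),
        # so it should count as a strong negative signal.
        return "block_or_contain"
    return "trust"
```

The point is that last branch before "trust": a validly signed file whose certificate was later revoked should land in the suspicious bucket, not sail through on the strength of the signature.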
One BIG problem I have with CF is that even if I'm right about its benefit for unknowns, it handles software it considers malicious poorly. For example, if it quarantines something but you think it's safe and want to run it anyway, you'd still want to do so with extra precautions, such as containment. Unfortunately, once you restore the file from quarantine and click yes when it asks if you want to add an allow containment rule, it sets the containment module to ignore the file. This means you have to know, and remember, to go to the containment rules and edit the rule to partially limited or restricted (and of course there's the whole issue of restricted not working in W10 unless UAC is completely disabled, though CS says that can be done safely and also says it isn't necessary because PL is good enough). It seems crazy to me that they would default to creating an allow/ignore rule. It would make much more sense to default to a PL or restricted rule or, better yet, let the default be configured in settings or, better still, let you choose in the pop-up when removing an item from quarantine, with the pop-up defaulting to whatever you chose in settings. But defaulting to allow for an item that was quarantined, presumably for good reason, just seems irresponsible. The exact same thing happens if you click the "Don't Isolate It Again" button on the notification when it's quarantined. This bypasses what is (AFAICT) the best protection feature of CF. I suppose the "fix" would be to change the auto-containment rule for malicious items from block to run virtualized or run restricted*, but then it would always let them run. I'd prefer to have them quarantined, so it takes more thought, effort, and knowledge to actually run them.
For example, if my mom were to try to run something that CF perceived as malicious, I'd prefer that it not go ahead and run it, even restricted or virtualized, and instead quarantine it, so that my dad or I would have to take a look and make a much more informed decision as to whether it should run.
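To illustrate the restore behavior I'm asking for, here's a quick Python sketch (the setting name and levels are hypothetical; Comodo doesn't actually offer anything like this):

```python
# Sketch of the restore-from-quarantine flow I'd prefer: restoring always
# attaches a containment rule, never a bare allow/ignore rule.
# (Hypothetical design, not options Comodo actually exposes.)

DEFAULT_RESTORE_LEVEL = "partially_limited"  # imagined user-configurable setting

def restore_from_quarantine(chosen_level=None):
    """Restore a quarantined file with a containment rule attached."""
    level = chosen_level or DEFAULT_RESTORE_LEVEL
    # "ignore" (a plain allow rule) should never be the silent default
    # for something the product just quarantined.
    assert level != "ignore", "restoring straight to 'ignore' defeats containment"
    return {"action": "restore", "containment_rule": level}
```

That way the pop-up could still let you pick a different level per file, but the fallback would be the configured containment level instead of a full allow.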
*I'm not really clear on the difference between virtualization and restriction in CF. According to Umbra's myths & facts thread, they're basically the same thing, but when I installed a malware program restricted, its folder was located in Program Files, whereas when I installed it virtualized, it was under a CF virtualization folder. And in the first case, I was able to remove the restriction and essentially make it a normal app, whereas with the virtualized install I can't seem to do that.
I'm also curious how you would be able to tell that an unknown app is malicious while it's running restricted, in order to know it's safe to remove the restrictions. The whole point of the restrictions is to prevent it from doing anything malicious, so you wouldn't be able to observe that behavior, which would make a malicious app appear innocuous.