The trust issue really interests me. Umbra's poll on the CCleaner backdoor shows that 50% of respondents have lost faith in Piriform.
Keep in mind that this forum is a gathering of security-aware PC users, so I am really curious how this incident will backfire on the trust of average PC users. When CCleaner download numbers decline, the value of Piriform to Avast drops by the same percentage.
I have discussed VoodooShield with Ondrej of Avast and suggested using VS Ai for the business market, because Microsoft's Windows Defender is really becoming a strong contender in the free consumer AV market.
Avast did a test with VS and said it had too many false positives for the consumer market. We (Dan and I) said they should not use whitelisting for the consumer market, but for the business/corporate market.
Ondrej was convinced that with third-party products (like CCleaner) they could attract enough users to Avast AV. The CCleaner incident may hurt that strategy.
Fun fact (you can ask Dan to verify): 20% of those false positives turned out to be actual malware when Dan did a manual analysis.
The Avast guys are great… very smart and a lot of fun to communicate with. I think we could have done some amazing things together if we had come to a consensus on a few key items.
Speaking in general… very general, here is what I believe is wrong with the AV industry. From what I understand, the majority of the AV industry is preoccupied with limiting false positives as much as possible.
That would be perfectly fine, except…
1. For some odd reason, AV companies would like to find a way to render a safe verdict on all keyloggers, cracks, patches, etc. There are already several of these for VS, and I couldn't care less, but either way, these types of files show malicious intent by their very nature. We should not have to retrain our ML/Ai models to call these files safe when their authors have proven themselves untrustworthy.
2. When AV companies analyze files, they execute them in a sandbox / malware analysis system, and a lot of the time the sandbox does not have the necessary dependencies to actually trigger the malicious code. As a result, the file is judged safe when in fact it is not. If the payload did not detonate in the sandbox because of a missing dependency, that certainly does not mean the file is safe (there is a harmless toy example of this at the bottom of this post).
3. Probably the most important point is this… when AV test labs test AV products, they test using KNOWN samples.
What about the unknown malware? Sure, Ai is pretty good at detecting unknown malware… but what if there were a whole class of malware that everyone is missing?
Just lock the computer when it is at risk.
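To make point 2 a bit more concrete, here is a harmless toy sketch in Python. Nothing here is Avast-specific and every path and name is made up; the point is only that the interesting branch never runs in a bare analysis VM, so automated dynamic analysis sees a program that starts, does nothing, and exits.

```python
# Toy example of how a missing dependency can hide behavior from a sandbox.
# The "payload" is just a print statement; the branch that contains it never
# executes in an analysis VM that lacks the (hypothetical) dependency.

import os

def dependency_present():
    # Hypothetical check: the sample expects a third-party DLL or config file
    # that exists on real victim machines but not in a bare sandbox image.
    return os.path.exists(r"C:\Program Files\SomeVendor\vendor_api.dll")

def run():
    if not dependency_present():
        # In the sandbox: nothing suspicious ever happens, so the verdict
        # based on observed behavior comes back clean.
        return
    # On a real machine with the dependency installed, the payload fires.
    print("payload would execute here")

if __name__ == "__main__":
    run()
```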
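And since "lock the computer when it is at risk" is easy to say and harder to picture, here is a minimal sketch of the idea, again in Python. This is not VoodooShield's actual code; the process names, whitelist, and risk conditions are made up purely to show the default-deny principle.

```python
# Toy illustration of a "lock the computer when it is at risk" policy.
# When an attack-surface app (browser, email client) is running, deny anything
# that is not already on the local whitelist, instead of trying to guess
# whether an unknown file is good or bad.

RISKY_APPS = {"chrome.exe", "firefox.exe", "outlook.exe"}   # hypothetical
WHITELIST  = {"notepad.exe", "winword.exe", "ccleaner.exe"} # hypothetical

def computer_is_at_risk(running_processes):
    """The machine is 'at risk' whenever an attack-surface app is running."""
    return any(p.lower() in RISKY_APPS for p in running_processes)

def allow_execution(new_process, running_processes):
    """Default-deny gate: while at risk, only whitelisted programs may run."""
    if not computer_is_at_risk(running_processes):
        return True                               # not at risk: allow
    return new_process.lower() in WHITELIST       # locked: unknown = blocked

# A payload dropped by the browser is blocked even if no signature or ML model
# has ever seen it before; a whitelisted program still runs normally.
print(allow_execution("invoice_payload.exe", ["chrome.exe"]))  # False
print(allow_execution("winword.exe", ["chrome.exe"]))          # True
```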