Well, I asked for facts and data, Frank, and even as I got round to finishing my first reply, I realised you’d already given them to us. You’ve kindly provided us with a link to the MRG report. You’ve made it easy for us, so I have no excuse now.
But whoaaaahh!! Hang on a moment. You can’t just present us with a pretty picture from the report and then shout, “Doom and gloom!” because the colours don’t look good. Context is very important. First, we need to look at the small print. Then we need to evaluate the results of the test in the light of the particular methodology of that AV product and the particular properties of its multi-layered defences.
My competency in this area is with Webroot, so I will limit myself to that product.
--------------------------------------------------------------------------------------------------------------------------------
Let’s take the first page: all the ITW samples (329 of them). This is the page you captured and uploaded to your post. Just looking at the picture and ignoring everything else, it looks alarming for Webroot. It even had me alarmed…just for a moment: that is, until I opened the report and carefully studied it. Please note: I am not an IT expert at all, just someone who knows the AV product I use and who tries to read the whole lab report and understand the results in the light of the nature of that product. The same applies, of course, to all of the AV products, but I don’t know the other ones well, so I limit myself to the one I do know.
Looking at this report, the first thing I notice is a mistake that MRG has made (albeit the only mistake I have found so far). They say that the “table is sorted by smallest amount of failures”. But that means Webroot should not be in second-to-last place but next to Microsoft, as its “Miss” rate is 0.30%, less than ESET’s and McAfee’s and equal to Microsoft’s. It also means that Webroot missed just 1 of the 329 samples. Not a disaster by a long stretch. And I will come back to that “miss” in a moment.
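A quick sanity check of my own arithmetic here (on the assumption that the 0.30% in the chart is simply a rounded figure):

```python
# Back-of-the-envelope check: one miss out of 329 ITW samples, as a percentage.
misses = 1
samples = 329
print(f"{misses / samples * 100:.2f}%")  # prints 0.30%
```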
What else do we discover? We find that Webroot is monitoring and journaling some of the malware samples before making a determination, just as it is designed to do: in this case 88 of them, which it subsequently determined to be bad. But is it 88? Or is it 89? Take the one that it “missed”: would it have “missed” it had the period been 25 hours instead of 24? Or 36 hours? And so on. I assume that Webroot was still monitoring that file at the end of that 24-hour period.
Now you might not agree with Webroot’s methodology (I know people here don’t), but you have to agree that it is behaving precisely as it is designed to do, and that the “miss” of 1 out of 329 malicious files/processes is maybe, depending of course on your point of view, not a “miss” at all.
I would go one step further. I said that Webroot should have been placed next to Microsoft. I imagine you assumed that meant to the right of Microsoft, since Microsoft makes far fewer determinations in the 24-hour period following insertion of these malicious files into the machine and far more immediate determinations. But I would disagree. I do not know Windows Defender in depth, but I assume that, unlike Webroot, it has two classifications, not three: Good and Bad. I assume it does not have an Unknown classification that automatically triggers close monitoring and journaling, not to speak of imposing highly restricted privileges on that Unknown file. If that is true, it presumably means that malicious files on a machine protected by Windows Defender have free rein to do whatever harm they wish up until they are determined to be bad. I would therefore put Webroot very much to the left of Microsoft.
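To make the distinction I am drawing a little more concrete, here is a purely illustrative sketch (my own, not Webroot’s or Microsoft’s actual logic, and all the names in it are made up) of a two-state versus a three-state classification model:

```python
from enum import Enum

class Verdict(Enum):
    GOOD = "good"
    BAD = "bad"
    UNKNOWN = "unknown"  # exists only in the three-state model

def two_state(is_known_bad: bool) -> Verdict:
    # Two-state model: anything not known to be bad is treated as good
    # and runs with normal privileges straight away.
    return Verdict.BAD if is_known_bad else Verdict.GOOD

def three_state(is_known_bad: bool, is_known_good: bool) -> Verdict:
    # Three-state model: a file that is neither known good nor known bad
    # is flagged Unknown, which would trigger monitoring, journaling and
    # restricted privileges until a final determination is made.
    if is_known_bad:
        return Verdict.BAD
    if is_known_good:
        return Verdict.GOOD
    return Verdict.UNKNOWN
```

In the two-state sketch, a brand-new malicious file is simply “good” until the product catches up; in the three-state sketch, it spends that same window confined as “unknown”.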
Also, please don’t forget: it is very possible that that “missed” file (1 out of 329) was not really missed at all.
-----------------------------------------------------------------------------------------------------------------------------------------------
Let’s move on to the second chart.
Not much to speak of here.
The only thing I would say is that I am surprised there are only TWO ransomware samples; I would have preferred rather more.
-----------------------------------------------------------------------------------------------------------------------------------------------
Chart 3.
Financial Malware
Once again, Webroot is behaving exactly as it is designed to do. And once again, I would place Webroot to the left of Microsoft.
And please note that, according to Webroot’s paradigm, it has successfully detected 100% of the malicious samples.
-----------------------------------------------------------------------------------------------------------------------------------------------
Chart 4.
PUAs/Adware
Here I have an issue. So, incidentally, do most of the helpers at the Webroot Community Forum. Although Webroot has become somewhat more proactive regarding PUAs than hitherto, in our opinion it is still not proactive enough!
Maybe it’s a question of priorities for Webroot(1). After all, it is true that PUAs are not malicious per se. But they can be a confounded nuisance, and Webroot’s ambivalent attitude to them could in the long term affect customers’ perception of its product.
-----------------------------------------------------------------------------------------------------------------------------------------------
Chart 5.
Fileless exploits.
This looks bad…at first blush. But wait a moment. What was the point of entry for these exploits?
“Some URLs come from our regular honeypots” (p.7). Is this where the (three) exploits came from? In real life, dodgy stuff comes from dodgy URLs. And the Webroot Web Threat Shield is particularly good at singling out those dodgy URLs (even Umbra admits that, lol; see his post above regarding this). And I believe that the Webroot BrightCloud bots revisit each website every 24 hours (!!!) to search for any negative change in a URL’s status. I therefore doubt that, in a real-life situation, these exploits would have got through.
Incidentally, as most people know, Webroot is currently developing and beta-testing an anti-exploit module that will strengthen protection against this threat even further.
-----------------------------------------------------------------------------------------------------------------------------------------------
Chart 6.
FPs
Webroot scored 0.10% false blocks. According to my calculations, that is one false block out of 997 samples.
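Again, checking my own arithmetic (on the assumption that the 0.10% shown is a rounded figure):

```python
# One false block out of 997 clean samples, as a percentage.
false_blocks = 1
clean_samples = 997
print(f"{false_blocks / clean_samples * 100:.2f}%")  # prints 0.10%
```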
Is that a big drama? You be the judge.
-----------------------------------------------------------------------------------------------------------------------------------------------
Conclusion.
According to MRG’s criteria (p.5), AV products must make an "initial" detection to qualify for Level 1 Certification. That automatically rules Webroot out of Level 1 because of its particular methodology.
Given the results and Webroot’s way of working, I am very satisfied with this report for Webroot (bar the PUAs).
(1) Also, btw, economics: potentially ruinously costly lawsuits from those pesky PUA makers.