All you have to do to mess with Webroot is use a batch of samples it is no good at dealing with. Pick up any Virussign pack and there will be ancient samples in it that get past Webroot and wreck the system precisely because Webroot missed them. The fanboys say such testing isn't realistic. Whatever. It's nothing more than a justification because they live in denial.
Like @davisd said, every single time Webroot performs dismally, it comes forward and states the product was mis-tested. For example, they have done this with MRG Effitas repeatedly. And even after MRG Effitas "fixed" the test, Webroot is still doing poorly.
But not to this extent, especially relative to the other products tested. Someone suggested the use of scripts was the culprit, but that cannot be the case, as a number of the other products tested are, as far as I know, virtually oblivious to this class, yet they scored in the 90s. Something is wrong here (and please, please note that I am as far from a Webroot apologist as you can get!). I don't mean to harp on this, but I am honestly confused.
Also, I suppose any discussion of this particular test is an exercise in futility, as SE Labs only describes its methodology in the vaguest possible way: they run the test over 3 months (a yearly quarter), they get their malware from AMTSO (which any subscriber can also get), and they run it against various products. Do they collect malware for 30 days, run it all at once monthly, and repeat this 3 times over the 3-month span? Do they collect malware for 3 months and then run the test? We just don't know, because they don't tell us.
The one thing we can be certain of is that either this test is not done daily or the malware they use is not D+1 or newer (the actual things a user will come across, since that is what is actively being pushed out). Fresh malware (a truly Real World scenario) would never yield such superlative results for the vast majority of products tested.
For me, a True Real World test would go something like this (see the sketch after the list):
1). We got these 10 samples from a honeypot, all undetected as of 6 hours ago.
2). We made sure they were malicious and all different.
3). We ran them against all of the products tested SIMULTANEOUSLY within the D+1 timeframe.
4). These are the results...
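Not that any lab will run it, but just to make the idea concrete, here is a minimal sketch of that kind of harness. Everything in it is a hypothetical placeholder - Sample, scan_with_product(), the 6-hour and D+1 windows as constants - none of this is any real lab's or vendor's tooling.

```python
# Minimal sketch of the "True Real World" protocol above.
# Everything here is a hypothetical placeholder, not a real lab's or
# vendor's tooling.
import hashlib
import time
from dataclasses import dataclass

MAX_SAMPLE_AGE = 6 * 3600   # step 1: samples first seen no more than 6 hours ago
D_PLUS_1 = 24 * 3600        # step 3: the whole run must finish within D+1

@dataclass
class Sample:
    path: str
    first_seen: float           # epoch seconds, as reported by the honeypot
    confirmed_malicious: bool   # step 2: verified by hand / in a sandbox

def sha256(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def select_samples(candidates: list[Sample]) -> list[Sample]:
    """Steps 1 and 2: keep only fresh, confirmed-malicious, mutually distinct samples."""
    now = time.time()
    fresh = [s for s in candidates
             if s.confirmed_malicious and now - s.first_seen <= MAX_SAMPLE_AGE]
    unique = {sha256(s.path): s for s in fresh}   # dedupe: "all are different"
    return list(unique.values())

def scan_with_product(product: str, sample_path: str) -> bool:
    """Placeholder: a real harness would detonate the sample on an isolated,
    identical VM snapshot with the product installed and report block/miss."""
    raise NotImplementedError

def run_test(samples: list[Sample], products: list[str]) -> dict[str, int]:
    """Step 3: the same sample set against every product inside one D+1 window.
    (A real harness would run the products in parallel VMs, not in a loop.)"""
    start = time.time()
    blocked = {p: 0 for p in products}
    for product in products:
        for sample in samples:
            if scan_with_product(product, sample.path):
                blocked[product] += 1
    assert time.time() - start <= D_PLUS_1, "window exceeded; results are stale"
    return blocked   # step 4: these are the results...
```

The assert just encodes the D+1 constraint; the hard part in reality is steps 1 and 2 (sourcing and confirming genuinely fresh samples), not the loop.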
But it seems the Pro testing sites would rather use older malware so that the overall results for the bulk of the products tested come in above 90%. This may make users of these products feel good, but it also puts them at risk due to such shoddy methodology.
Please just remember one very important thing: the malware being actively pushed out by the Blackhats is NOT OLD STUFF, yet old stuff seems to be what the Pro Sites use.
Remember, PowerShell is not malicious in itself; it acts as a trigger for the true payload, and it's not as if WR will allow any payload downloaded/installed by PS to run without subsequent checks. The use of scripts/PS malware would not explain the results.
Yeah, I could construct a test that would trash WR without that much problem- the issue would be using the same malware files and getting the superlative results seen in some of the others. It's not that WR sucked in this test- it's that it sucked that much compared to the others. And the pathetic lack of info on the exact methodology (as you point out) really should make one question the legitimacy of this test, and SE Labs itself.
Over the past 12 months, Webroot has performed poorly, either overall or in the detection/remediation sections, in other AV lab tests as well. That establishes a pattern, and a pattern of poor performance across multiple tests from different AV test labs says a whole lot more about the product than this single test does.
Not just people, but users intelligent enough to see that all these tests are flawed. Many developers from these companies admit, on rare occasions, that there is no sound testing methodology out there because there are too many variables. If that is the case, what is the point of making these half-assed ones?
People love to bitch about tests. And yes, there are variables and limiting factors. But testing is, in fact, ultimately the judge of a product. Testing ultimately fleshes out the best products and flushes out the worst.
What is stated here does have meaning. It means home users are not targeted like corporations and businesses and do not see the amount of malware/infections that gets run through these tests. I could put Webroot on my system right now for a year and guarantee I would go the whole time without an infection.
When companies start giving all sorts of excuses for poor test results... that has meaning. Sit up and listen. That has a lot of meaning.
And the Webroot rep at another website is at it now -- this is what he's saying about Webroot testing failure:
"If WSA was that bad we would be seeing complaints here and at the Webroot Community with many infections and we don't."
"...some others think that any of these testings organizations is the word from god. Oh well.... "
Pathetic.
To test a product, whether it be software, hardware, vehicles, you name it, you have to design the test around the design of the product to cover all variables. Not one of these sites does this.
AMTSO and its members debated testing issues to death and are the ones who came up with, and agreed upon, the current general testing standards. The whole point of the tests is to make comparisons that are as valid (apples to apples) as is technically possible. For a comparison test to work, each product must be comparable to the others; if one product is only an AV and the other is a HIPS, there is no valid comparison. The general testing compares the protection results of comparable modules.
Even to this day, certain publishers argue the labs aren't doing even the general testing right, but those very same publishers keep participating.
Most labs make publishers jump through hoops. So the publisher knows what and how things are going to be measured and reported beforehand. The publisher is agreeing to the testing methodology by enrolling in the testing.
Every once in a while you will see commissioned tests. Those are more or less testing the features of a specific product, and they are a joke, because the lab will compare a HIPS product to a product that is only an AV. That's an invalid test. Labs routinely make invalid comparisons in commissioned tests that are obviously rigged heavily in favor of the commissioning publisher.
What people want to see is the test results of products A, B and C at maximum settings, with everything within each product tested, reported side by side as absolute results, with a not-applicable (N/A) entry in the result chart wherever a product lacks the feature or it couldn't be tested.
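Just to illustrate the N/A idea, here is a throwaway sketch; the product names, features and numbers are made-up placeholders, not results from any actual test.

```python
# Throwaway sketch of a side-by-side "absolute results" chart with N/A entries.
# Product names, features and numbers are made-up placeholders, NOT real results.
FEATURES = ["Signature AV", "HIPS", "Rollback"]

RESULTS = {
    "Product A": {"Signature AV": 97.0, "HIPS": 88.0, "Rollback": 91.0},
    "Product B": {"Signature AV": 94.0, "HIPS": None, "Rollback": None},   # None = feature absent
    "Product C": {"Signature AV": 99.0, "HIPS": 95.0, "Rollback": None},
}

def cell(value):
    # Absolute score, or N/A where the product lacks the feature or it couldn't be tested.
    return f"{value:.1f}%" if value is not None else "N/A"

print(f"{'Product':<11}" + "".join(f"{f:>14}" for f in FEATURES))
for product, scores in RESULTS.items():
    print(f"{product:<11}" + "".join(f"{cell(scores[f]):>14}" for f in FEATURES))
```

Each cell is an absolute result for that product alone - no weighting, no "certified" tiers.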
No one is going to pay for that testing. And then you will have publishers screaming "Foul, foul... unfair testing...". And probably lawsuits, as was the case with Cylance's test lab and testing shenanigans.
The problem is that no publisher will agree to, or pay for, comprehensive testing that reports absolute results on a comparison basis. I don't think the labs will do it either.
I've tested Webroot to death against ancient malware - stuff that has been around and available for years - and my results mirror the test labs'. As far as my own testing goes, the product just isn't good. I've reported my test results many times over the years. They've known about it, and they would never fix it. This went on for years and years.
I have no doubt about what you are saying here. I also have no doubt that none of the publishers would want their products tested correctly and fairly, as they all claim to protect against new "zero-day" threats; if tested correctly, they would all fail and leave any and all potential customers asking "why should I pay for that?"... But this could turn into a very long debate that spins its tires and gains no traction, which, as you have pointed out above, has been the case for a very long time.
I'm not trying to argue with anyone, just simply stating: if their methodologies are not correct and the testing is not done correctly, why does anyone, including the publishers, even bother with these tests? Oh, I remember: because old samples are used and their products look good in that light.
That's just a point I'm trying to make, so that new/average users understand what they are looking at with these tests.
Another lab test... hmm. I trust, and take as more decisive, the tests from "normal people" like us, meaning those who try these products against real dangers from the internet in daily use. Basically, YouTube has better tests.
Unless they can code and morph samples, what you watch there is inaccurate as well, not to mention most of those "YouTuber" tests are done for ad revenue and are not constructed anywhere close to how this test would/could have been.
I hope Webroot gets on it and the program is updated sometime. Not only the database and the security, but the design in general; I think in 2018 nobody likes to use a design that looks like it's from 2013 ^^
Hopefully most users here have figured out that a shiny UI is of no importance; it will not matter how good a product looks if it sucks at what it claims to do. I will take an ugly UI any day if the product is good.
I got a virus after a small amount of time with Webroot.
Hopefully you will not take this personally, but one can get "infected" with any product if they have poor habits and take risks; products cannot protect a user from themselves. That is why knowledge is needed as well as security.
I got a virus. It took over my Firefox. It did its damage in less than 30 seconds. It broke my internet connection. Webroot did not blink.
When I had Webroot, the rollback feature did not work. The sandbox may have worked the first couple of times.