Security News: SE Labs Consumer Protection, June 2018

Brie

When I had Webroot, the rollback feature did not work. The sandbox may have worked the first couple of times.
 

illumination

All you have to do to mess with Webroot is use a bunch of samples that it is no good at dealing with. Pick up any VirusSign pack and there will be ancient samples in it that smash Webroot, and then the system, because Webroot was smashed. The fanboys say such testing isn't realistic. Whatever. It's nothing more than justification, because they live in denial.

Like @davisd said, every single time Webroot performs dismally, it comes forward and states the product was mis-tested. For example, they have done this with MRG Effitas repeatedly. And even after MRG Effitas "fixed" the test, Webroot is still doing poorly.

Indeed, but clearly this is not one of those times; anyone with an ounce of common sense can see this test is extremely flawed. Of course, they all are; I have yet to see a clearly defined methodology or sample verification. I'm not biased toward any product; I have slammed all these so-called professional tests for a long time now, pointing out the very same things that have been pointed out here. They all do it. Threads like this should not be allowed at this point, not if the advanced users here want to help average users, because these tests surely do not.

While we are on this subject, I would like to point something else out. Maybe, just maybe, if all the IT pros and advanced users would stop fighting and belittling each other for the sake of a name in the business and actually started working together, we might all just find a way to tip the balance between good and bad, because, as of right now, the bad guys are certainly a few steps ahead... constantly...
 

509322

But not to this extent, especially with respect to the other products tested. Someone mentioned the use of scripts as the culprit, but this cannot be the case, as a number of the other products tested are, to my knowledge, virtually oblivious to this class and still scored in the 90s. Something is wrong here (and please, please note that I am as far from a Webroot apologist as you can get!). I don't mean to harp on this, but I am honestly confused.

Also, I suppose any discussion of this particular test is an exercise in futility, as SE Labs only describes its methodology in the vaguest way possible: the test runs over three months (a calendar quarter), the malware comes from AMTSO (which any subscriber can also get), and the samples are run against the various products. Do they collect malware for 30 days, run it all at once, and repeat this three times over the three-month span? Do they collect malware for three months and then run the test? We just don't know, because they don't tell us.

The one thing we can be certain of is that either this test is not run daily and/or the malware used is not D+1 or newer (the actual things a user will come across, since that is what is actively being pushed out). Fresh malware (a truly Real World scenario) would never yield such superlative results for the vast majority of the products tested.

For me, a True Real World test would be (a rough harness sketch follows the list):

1) We got these 10 samples from a honeypot, all undetectable 6 hours ago.
2) We made sure that they were malicious and all different.
3) We ran them against all of the products tested SIMULTANEOUSLY, within the D+1 timeframe.
4) These are the results...
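
Purely as an illustration of steps 1-4, here is a minimal sketch of what such a harness might look like in Python. Everything in it is assumed: the sample folder, the per-product scan commands, and the convention that a non-zero exit code means the scanner flagged the file; real products would each need their own CLI or automation hook.

```python
# Hypothetical harness for the test above: take fresh honeypot samples,
# drop duplicates (step 2), run every product's on-demand scanner over the
# same set within the same window (step 3), and report hits (step 4).
import hashlib
import subprocess
from pathlib import Path

SAMPLE_DIR = Path("honeypot_samples")  # assumed drop folder for step 1

# Assumed placeholder scan commands; each takes the file path as its last argument.
PRODUCTS = {
    "ProductA": ["producta-cli", "--scan"],
    "ProductB": ["productb-scan"],
}

def unique_samples(folder: Path) -> list[Path]:
    """Return the sample files, dropping byte-for-byte duplicates by SHA-256."""
    seen, files = set(), []
    for f in sorted(folder.iterdir()):
        digest = hashlib.sha256(f.read_bytes()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            files.append(f)
    return files

def run_test() -> None:
    samples = unique_samples(SAMPLE_DIR)
    for name, cmd in PRODUCTS.items():
        detected = 0
        for sample in samples:
            # Assumption: a non-zero exit code means the scanner flagged the file.
            result = subprocess.run(cmd + [str(sample)], capture_output=True)
            detected += result.returncode != 0
        print(f"{name}: {detected}/{len(samples)} detected")

if __name__ == "__main__":
    run_test()
```

The whole point is step 3: every product sees exactly the same deduplicated sample set within the same window, so the numbers are directly comparable.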

But it seems that the Pro testing sites would rather use older malware so that the overall results for the bulk of the products tested are over 90%. This may make the users of these products feel good, but they are also being put at risk by such shoddy methodology.

Please just remember one very important thing: the malware being actively pushed out by the blackhats is NOT OLD STUFF, yet this seems to be the malware used by the Pro sites.

The labs are never going to give the step-by-step details of what and how they do it. It's not in their best interests. And they don't care, and never will care, what people on security soft forums say. People can complain until the Earth falls into the sun and nothing is going to change.

AV lab tests are only partially about product protection validation. Their greater purpose is marketing. And if the AV labs were to test products in a way that exposed all their serious flaws, no publisher would use their services and they would go bankrupt.

The rationale is that older malware is what people are likely to face in user-land, so that is why they use it predominantly. Plus, they have a hard time testing truly new ("zero day" is a misnomer) malware. For example, in Germany it would be illegal to modify existing malware to make it undetectable, or for other testing purposes.

It doesn't matter what security soft geeks think. What matters is the typical user who relies upon AV lab test results to select a product. And the current test methodology and reporting must sell softs for the publishers; otherwise the publishers wouldn't pay to be included in the testing.
 

509322

Remember, PowerShell is not malicious in itself; it acts as a trigger for the true payload, and it's not as if WR will allow any payload downloaded/installed by PS without subsequent checks. The use of script/PS malware would not explain the results.

Yeah, I could construct a test that would trash WR without much of a problem; the issue would be using the same malware files and getting the superlative results seen with some of the others. It's not that WR sucked in this test; it's that it sucked that much compared to the others. And the pathetic lack of information on the exact methodology (as you point out) really should make one question the legitimacy of this test, and of SE Labs itself.

Webroot does not yet check scripts to the extent that other products do. They are only now testing closer script inspection in beta.
 

509322

Over the past year, Webroot has performed poorly in other AV lab tests, either overall or in the detection/remediation sections. Over a period of 12 months, that establishes a pattern. A pattern of poor performance across multiple tests from different AV test labs says a whole lot more about the product than this single test does.

The data is there for anyone who wishes to look at it. Then use your own judgment. It's not too difficult when 1 + 1 = 2.
 

Burrito

Over the past year, Webroot has performed poorly in other AV lab tests, either overall or in the detection/remediation sections. Over a period of 12 months, that establishes a pattern. A pattern of poor performance across multiple tests from different AV test labs says a whole lot more about the product than this single test does.

Webroot has gotten crushed in testing since its inception. After a notable AV-C crushing, Webroot effectively stomped off like a baby and declared that they would not participate in testing anymore.

And Lockdown's point is key. Webroot gets crushed regularly. They have occasional good test results, which makes it confusing for consumers. But they regularly get crushed. Conversely, the best products typically do well in most tests. It's very rare to see Kaspersky, Bitdefender, or Norton bomb a test. It's not a conspiracy... it's because they are generally better products. Oh, and when Webroot does well in a test... then testing is apparently OK after all at Kool-Aidville, the Webroot forums.

In an MRG test that was designed around Webroot, they finished in the bottom half. They let the AV have 24 hours to detect/roll back...
Check out the Webroot response (non-response) right here:
Solved: Does Webroot have a response for the test results described in the Q3 2016 ... - Webroot Community

Check out this Webroot result from MRG where they finished dead last behind ThreatTrack and Windows Defender:
https://www.mrg-effitas.com/wp-content/uploads/2018/03/MRG-Effitas-360-Assessment_2017_Q4_wm.pdf

In a very rare admission of failure regarding the above-linked test, a Webroot employee stated: "I have no information that would indicate that the results were not accurate."
[Discussion] - Antimalware testing is hard, disputing a flawed test is even... - Page 3 - Webroot Community

===========

All the tests are data points. They all have meaning. Particularly the tests governed by AMTSO, which sets the 'ground rules' for testing. Webroot is a member of AMTSO.

People love to bitch about tests. And yes, there are variables and limiting factors. But testing is in fact ultimately the judge of a product. Testing ultimately fleshes out the best products and flushes out the worst products.

When companies start giving all sorts of excuses for poor test results... that has meaning. Sit up and listen. That has a lot of meaning.

And the Webroot rep at another website is at it now; this is what he's saying about the Webroot testing failure:
"If WSA was that bad we would be seeing complaints here and at the Webroot Community with many infections and we don't."
"...some others think that any of these testings organizations is the word from god. Oh well.... "


Pathetic.
 

illumination

People love to bitch about tests. And yes, there are variables and limiting factors. But testing is in fact ultimately the judge of a product. Testing ultimately fleshes out the best products and flushes out the worst products.
Not just people; users who are intelligent enough to see that all these tests are flawed. Developers from these companies admit, on rare occasions, that there is no sound testing methodology out there because there are too many variables. If that is the case, what is the point of making these half-assed ones?

When companies start giving all sorts of excuses for poor test results... that has meaning. Sit up and listen. That has a lot of meaning.

And the Webroot rep at another website is at it now; this is what he's saying about the Webroot testing failure:
"If WSA was that bad we would be seeing complaints here and at the Webroot Community with many infections and we don't."
"...some others think that any of these testings organizations is the word from god. Oh well.... "


Pathetic.
What is stated here does have meaning. It means home users are not targeted the way corporations and businesses are, and they do not see the amount of malware/infections that gets run through these tests. I could put Webroot on my system right now for a year and guarantee I would go the whole time without an infection.

Bottom line... To test a product, whether it be software, hardware, vehicles, you name it, you have to design the test around the design of the product to cover all variables. Not one of these sites does this.
 

509322

To test a product, whether it be software, hardware, vehicles, you name it, you have to design the test around the design of the product to cover all variables. Not one of these sites does this.

AMTSO and its members debated testing issues to death and are the ones who came up with, and agreed upon, the current general testing standards. The whole point of the tests is to make comparisons as valid (apples to apples) as is technically possible. The whole point of comparison testing is that each product must be comparable to the others. If one product is only an AV and the other is a HIPS, then there is no valid comparison. The general testing compares the protection results of comparable modules.

Even to this day, certain publishers argue the labs aren't doing even the general testing right, but those very same publishers keep participating.

Most labs make publishers jump through hoops. So the publisher knows what and how things are going to be measured and reported beforehand. The publisher is agreeing to the testing methodology by enrolling in the testing.

Every once in a while you will see commissioned tests. Those are more or less testing the features of the specific product. And those tests are a joke, because the lab will compare a HIPS product to a product that is only an AV. That's an invalid test. Labs routinely make invalid comparisons in commissioned tests that are obviously rigged heavily in favor of the commissioning publisher.

What people want is to see the test results of products A, B and C at maximum settings, with everything within the product tested, reported side by side with absolute results, and a not-applicable (N/A) entry in the test result chart where a product doesn't have the feature or it couldn't be tested.

No one is going to pay for that testing. And then you would have publishers screaming "Foul, foul... unfair testing...". And probably lawsuits, as was the case with Cylance's test lab and testing shenanigans.

The problem is that no publisher will agree to, nor pay for, comprehensive testing that reports absolute results on a comparison basis. I don't think the labs will do it either.

I've tested Webroot to death against ancient malware (stuff that has been around and available for years) and my results mirror the test labs'. As far as my own testing goes, the product just isn't good. I've reported my test results many times over the years. They've known about it, and they would never fix it. This went on for years and years.
 

illumination

AMTSO and its members debated testing issues to death and are the ones who came up with, and agreed upon, the current general testing standards. The whole point of the tests is to make comparisons as valid (apples to apples) as is technically possible. The whole point of comparison testing is that each product must be comparable to the others. If one product is only an AV and the other is a HIPS, then there is no valid comparison. The general testing compares the protection results of comparable modules.

Even to this day, certain publishers argue the labs aren't doing even the general testing right, but those very same publishers keep participating.

Most labs make publishers jump through hoops. So the publisher knows what and how things are going to be measured and reported beforehand. The publisher is agreeing to the testing methodology by enrolling in the testing.

Every once in a while you will see commissioned tests. Those are more or less testing the features of the specific product. And those tests are a joke, because the lab will compare a HIPS product to a product that is only an AV. That's an invalid test. Labs routinely make invalid comparisons in commissioned tests that are obviously rigged heavily in favor of the commissioning publisher.

What people want is to see the test results of products A, B and C at maximum settings, with everything within the product tested, reported side by side with absolute results, and a not-applicable (N/A) entry in the test result chart where a product doesn't have the feature or it couldn't be tested.

No one is going to pay for that testing. And then you would have publishers screaming "Foul, foul... unfair testing...". And probably lawsuits, as was the case with Cylance's test lab and testing shenanigans.

The problem is that no publisher will agree to, nor pay for, comprehensive testing that reports absolute results on a comparison basis. I don't think the labs will do it either.

I've tested Webroot to death against ancient malware (stuff that has been around and available for years) and my results mirror the test labs'. As far as my own testing goes, the product just isn't good. I've reported my test results many times over the years. They've known about it, and they would never fix it. This went on for years and years.

I have no doubt about what you are saying here. I also have no doubt that none of the publishers would want their products tested correctly and fairly, as they all claim to protect against new ("zero day") threats, and if tested correctly they would all fail, leaving any and all potential customers asking "why should I pay for that?"... But this could turn into a very long debate that spins its tires and gains no traction, which, as you pointed out above, has been the case for a very long time.

I'm not trying to argue with anyone, just simply asking: if their methodologies are not correct and the testing is not done correctly, why does anyone, including the publishers, even bother with these? Oh, I remember: because old samples are used and their products look good in that light.

It's just a point I'm trying to make, so that new/average users understand what they are looking at with these tests.
 

509322

I have no doubt about what you are saying here. I also have no doubt that none of the publishers would want their products tested correctly and fairly, as they all claim to protect against new ("zero day") threats, and if tested correctly they would all fail, leaving any and all potential customers asking "why should I pay for that?"... But this could turn into a very long debate that spins its tires and gains no traction, which, as you pointed out above, has been the case for a very long time.

I'm not trying to argue with anyone, just simply asking: if their methodologies are not correct and the testing is not done correctly, why does anyone, including the publishers, even bother with these? Oh, I remember: because old samples are used and their products look good in that light.

It's just a point I'm trying to make, so that new/average users understand what they are looking at with these tests.

It's unfortunate, but there needs to be a guide "How to Interpret AV Test Lab Results - What They Say and Don't Say."

It's also unfortunate that the only ones who would ever bother to read it are security soft geeks.

 

dJim

Another lab test... hmm. I trust, and find more decisive, the tests from "normal people" like us, meaning people who try really dangerous things from the internet in daily use; basically, YouTube has better tests.
 

illumination

Another lab test... hmm. I trust, and find more decisive, the tests from "normal people" like us, meaning people who try really dangerous things from the internet in daily use; basically, YouTube has better tests.
Unless they can code and morph samples, what you watch there is inaccurate as well, not to mention that most of those "YouTuber" tests are done for ad revenue and are not constructed anywhere near as carefully as this test would/could have been.
 

Question

I hope Webroot gets to work on it and the program gets updated at some point. Not only the database and the security, but the design in general; I think in 2018 no one likes to use a design that looks like 2013 ^^
 

illumination

I hope Webroot gets to work on it and the program gets updated at some point. Not only the database and the security, but the design in general; I think in 2018 no one likes to use a design that looks like 2013 ^^
Hopefully most users here have figured out that a shiny UI is of no importance; it will not matter how good a product looks if it sucks at what it claims to do. I will take an ugly UI any day if the product is good.
 

ForgottenSeer 58943

It's good to see the end of the Webroot Kool-Aid days with Triple Herxheimer even throwing in the towel over there.

Webroot is, and always has been, smoke and mirrors. It might have been 'semi' relevant years ago, but today not so much. Someone in my family loved Webroot. I'd always go over after cries for help and find his machine hammered by malware. Herxheimer told me it couldn't be malware, that Webroot wouldn't fail me, and recommended I work with support. So over many days I worked with support, and the final outcome was: 'These files and malware are all really harmless; we didn't detect them because we don't view them as harmful.' Explain that to the Webroot user I was helping, whose browser was being actively injected on every page and whose startup had a new friend arrive each day.

Anyone who is shocked that Webroot is junk hasn't been paying attention (for years) and needs to lay off the green Kool-Aid.
 

Burrito

Not just people; users who are intelligent enough to see that all these tests are flawed. Developers from these companies admit, on rare occasions, that there is no sound testing methodology out there because there are too many variables. If that is the case, what is the point of making these half-assed ones?

You are engaging in mindless test bashing. Yes, tests have constraints and limitations. But if you read the methodology from multiple test organizations... it's not that bad. Talk with Andreas from AV-C offline. He'll give you an education.


I got a virus. It took over my Firefox. It did its damage in less than 30 seconds. It broke my internet connection. Webroot did not blink. :(

Yeah, the testing is not all inaccurate. Webroot Kool-Aid has a lot of sucktitude.


When I had Webroot, the rollback feature did not work. The sandbox may have worked the first couple of times.

And that's the thing. They claimed that rollback fixed everything. But it never worked quite correctly.... and it didn't fix everything. If it was that great, why are they building new modules for Webroot now?
 
