Slyguy

Level 43
Verified
Certainly this discussion has come up before.

However, I think we need to look at why all testing, including AV testing, should be viewed as questionable.

Companies, corporations, and governments, given the opportunity, will game tests, alter products or services, and otherwise compensate to ensure they pass in most cases. We've seen this before with AV testing: the reported strangeness in early Webroot tests that seemed to point to engineers actively monitoring the test and adjusting the product for it; Kaspersky has been discussed as having many people in positions to ensure test success; 360 was caught changing its product when it detected testing. I'm sure there are many other cases.

In the real world we see this happen constantly: everything from VW programming its engine computers to recognize emissions testing and improve the results, to firms cutting holes into catalytic converters to pass emissions tests.

More recently, look up the 'passenger-side small overlap crash test'. What was discovered is that many automakers were gaming the crash-test system by reinforcing the driver's side to pass a test that focused ONLY on the driver's side, while stripping most of the safety structure from the passenger side. They did this knowing the IIHS wasn't testing frontal small overlap on the passenger side. Then the IIHS started testing it and found absolutely horrifying conditions: essentially, almost all passenger-side front small-overlap crashes at 40 mph or greater would be fatalities. Ford, for example, reinforced the driver's side while actually removing metal and safety mechanisms from the passenger side. The result is horrifying.


Meanwhile, some manufacturers (Honda, Kia, Subaru, Mazda, and a few others) actually strengthened both sides at the same time.

I think this makes a logical case for well-known public testing of security products and software to be taken with a very large grain of salt. I'd venture that almost all public testing should be regarded as extremely suspect, and that individual testing (if feasible) will likely provide better insight into a product's effectiveness. How a product protects you based on what you do, what you use, and how you have it configured is likely to trump any industry-level test, IMO.
 

Back3

Level 2
I like the Consumer Reports way of doing things: they combine members' feedback with trends and technical data from government, academia, and industry to find the connecting threads beneath the numbers. Their tests also reflect how consumers actually use a product. I always consult their data before buying or leasing a new car.
 

Andy Ful

Level 49
Verified
Trusted
Content Creator
Making reliable AV tests is impossible because of the enormous costs. The well-known tests are based (by default) on doubtful statistics. For now, any such test is similar to randomly choosing 5 people in a town (including the dead), at a random time of day, and claiming that the proportion of children in this sample is close to the proportion actually living in the town. :giggle:
Even if one looked at the average results of several tests, the statistics still couldn't be perfect (because of the dead). (y)
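The town analogy above can be sketched numerically. A minimal simulation, with entirely made-up numbers (a hypothetical town of 1,000 residents, 20% of them children), shows how wildly a 5-person sample can miss the true proportion compared with larger samples:

```python
import random

random.seed(42)

# Hypothetical town: 1,000 residents, 20% children (illustrative numbers only).
TRUE_PROPORTION = 0.20
population = [1] * 200 + [0] * 800  # 1 = child, 0 = adult

def estimate(sample_size, trials=10_000):
    """Return the min/max estimates of the child proportion seen across
    many repeated random samples of the given size."""
    estimates = [
        sum(random.sample(population, sample_size)) / sample_size
        for _ in range(trials)
    ]
    return min(estimates), max(estimates)

for n in (5, 50, 500):
    lo, hi = estimate(n)
    print(f"n={n:4d}: estimates ranged from {lo:.2f} to {hi:.2f} "
          f"(true value {TRUE_PROPORTION:.2f})")
```

With n=5 the estimate swings anywhere from 0% to well over the true 20%; only the large samples settle near the real value, which is the core of the objection to small malware-sample sets.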
 

Cortex

Level 12
Leo, for just one example, will run a single test based on his samples that day and thereby give the AV a good rating or totally trash it. The fact that it did well or badly the year before matters not; what happens in that single test is all that counts. That is, of course, a simplistic take on his tests. The really sad thing is that users will then decide whether an AV is useful or should be ignored based on that single test. Much the same applies in the rest of life: I feel you have to own a car for a while to evaluate it in various situations, which isn't possible on a 30-minute test drive.
 

monkeylove

Level 3
I rely on results from several tests because I can't do my own or rely on anecdotes. Meanwhile, I watch what happens to various users in the news about malware.

Finally, the best I can do is to install and try the free versions of various AVs and see which one doesn't slow down file browsing on my machine. For now, it's the free version of KSC.
 

Andy Ful

Level 49
Verified
Trusted
Content Creator
I rely on results from several tests ...
That is reasonable if one remembers that the results are, by default, only approximate. So, for example, if your favorite AV gets an average score in a single test, it does not necessarily mean that something is wrong with it.
Furthermore, the tests can help AV vendors check whether there are weak points in their protective technology.
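To put rough numbers on "only approximate": a minimal sketch using hypothetical products and sample counts (none of these figures come from a real lab) and a normal-approximation confidence interval shows why a small gap in one test can be statistical noise:

```python
import math

def detection_ci(detected, total, z=1.96):
    """Approximate 95% confidence interval for a detection rate,
    using the normal approximation to the binomial."""
    p = detected / total
    margin = z * math.sqrt(p * (1 - p) / total)
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical: two products run against the same 500 samples.
for name, detected in [("Product A", 497), ("Product B", 492)]:
    lo, hi = detection_ci(detected, 500)
    print(f"{name}: {detected / 500:.1%} detected, "
          f"95% CI roughly {lo:.1%} to {hi:.1%}")
```

The two intervals overlap, so a 99.4% vs 98.4% "ranking" from a single 500-sample run doesn't reliably tell the products apart, which is exactly why an average score in one test proves little.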
 

Burrito

Level 21
Verified
Testing is THE basis for the comparison of products.

Oh sure, there are artificialities and limitations to testing. True.

But funny thing…. The test results at all the credible labs tend to correlate over time. Correlate the concept, not @Correlate the fine participant here at MT.

Over the past decade, Kaspersky, Norton, and Bitdefender often occupied top spots no matter the test organization – SE Labs, AV-TEST, AV-Comparatives, Dennis Labs, MRG…

And often, those results correlate with our homegrown tests in Malware Hub.

Conversely, the worst products tend to finish at the bottom – no matter the tester. In recent years, regardless of who did the testing, Webroot, Malwarebytes, and McAfee (with a few recent exceptions) have occupied the bottom of the barrel.

What is the alternative to testing as the basis of judgment of a product?

The worst basis is personal testimonial.

“I’ve used SuperAntispyware since I was in diapers and I’ve never been infected.” Uh huh. And some people use no security and have also never been infected. Does that prove the method?

I once read that in Jamaica, some literally get a witch doctor to cast a spell on the computer to ward off evil spirits and malware. And some swear by it. They’ve allegedly never been infected. The personal testimonials may be very powerful. Maybe we should test that method in The Hub.

Personal testimonial is the method of multi-level marketing, cults, and the less judicious.

Most enterprise security software relies on testing of some form to sell the product. When companies shell out big $$$ for software, there needs to be a reasoned basis for the purchase.

Those vendors that avoid tests usually have good reason to do so.

It’s popular for many to nitpick and bash tests.

Ok, if tests should be taken with a grain of salt or discounted --- what’s your basis for empirical, repeatable, and verifiable assessment of products?
 

Raiden

Level 13
Verified
Content Creator
@Burrito said: Testing is THE basis for the comparison of products. …
Good points!

Some may see me as anti-testing, but I really am not. For me, it's all about balance. Testing does have a place, like you said, but it still needs to be taken with a grain of salt. The real problem, at least as I see it, is the interpretation of the results.

Most people just look at the total, or at the graphs with the most "green," but they don't take the time to understand the nitty-gritty. A good example is the false-positive topic that comes up when discussing WD and AV-Comparatives. Sure, the total number is high, and it would be nice to see it lower, but if one looks at the appended chart, one sees that the bulk of the FPs come from files with very low to low prevalence. That doesn't mean we disregard the result, but it helps one understand what's going on.
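As an illustration of that prevalence point, here is a sketch with entirely made-up FP counts and user estimates (none of these numbers come from AV-Comparatives or any real report), showing how a raw FP total can overstate real-world impact when most FPs sit in low-prevalence files:

```python
# Hypothetical FP counts per prevalence bucket (illustrative numbers only).
fp_by_prevalence = {
    "very low (<100 users)":  40,
    "low (100-1k users)":     10,
    "medium (1k-10k users)":   3,
    "high (>10k users)":       1,
}
# Rough midpoint guess of users affected per flagged file in each bucket.
users_per_file = {
    "very low (<100 users)":      50,
    "low (100-1k users)":        500,
    "medium (1k-10k users)":   5_000,
    "high (>10k users)":      50_000,
}

total_fps = sum(fp_by_prevalence.values())
impact = {b: n * users_per_file[b] for b, n in fp_by_prevalence.items()}
total_impact = sum(impact.values())

print(f"Total FPs in the headline number: {total_fps}")
for bucket, n in fp_by_prevalence.items():
    share = impact[bucket] / total_impact
    print(f"  {bucket}: {n} FPs, ~{share:.0%} of estimated user impact")
```

In this made-up example, the "very low" bucket contributes most of the headline FP count but only a tiny share of the estimated user impact, which is the nuance a raw total hides.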

YouTube videos also aren't 100% reliable, as the testing methodology used isn't representative of how one actually uses a computer. It's further compounded when the reviewer doesn't know how a product works, or what a particular feature is designed to do.

Furthermore, there's also the question of how old the samples are, which most testing organizations don't reveal.

At the end of the day, testing is good, but it also needs to be taken with a grain of salt. Tests don't always translate to what's happening in the real world. Like @Andy Ful said, testing these programs isn't as easy as some make it out to be. Usage, attack vectors, the complexity of the malware, and so on all make it very difficult to test accurately.

My main gripe with testing in general isn't that it's not insightful; it's that people base their decisions on it and ignore everything else. Many assume that if they get a product that scores 100%, they will never be infected no matter what they do. Meanwhile, things like proper and safe computing habits, which make a big difference as well, are often forgotten in all of this. ;)

As I always say, testing is good and all, but it's not the be-all and end-all either. :)(y)
 