bazang
- Jul 3, 2024
All tests are specific and/or contrived whether performed by a security software test lab, a researcher, or an enthusiast. As long as the premise of the test is sound, then the test itself is valid.

Precisely. All tests have to be taken with a grain of salt.
Insurance companies cover a lot of things in their policies that organizations and individuals will never suffer a financial loss from, but the insurance company keeps including them and charging the policy owners for that very-unlikely-to-happen coverage. It is 100% profit for the insurance company. The insurance underwriter's argument is "If it can happen, then it should be covered [and you should have to pay for that coverage whether or not you want it]."
This is how any security software, and the testing of it, should be viewed. If something is within the realm of possibility, then a test showing that potentiality is valid. It matters not whether one considers it "real-world" or not. Software publishers use the "not real-world" argument to 1) dismiss or discredit a test result and 2) justify not fixing demonstrated weaknesses or vulnerabilities.
Lots of security software does very well in "real-world" tests and yet fails on a daily basis in real-world incidents against real, in-the-wild threats. Products widely acclaimed among the user base as "The best-of-the-best-of-the-best, Sir!" fail to protect hundreds of thousands of systems. And then there are those millions of users who never get infected because - regardless of the system configuration - they do not do the things that get others infected. That truth does not in any way discredit or diminish any sound test results. Tests are assessments of "what-ifs, corner cases, and abstractions of stupid human behaviors, misconfigurations, weaknesses & vulnerabilities."
The categorization of any test as "real-world" is actually a misnomer, because all security tests are fabricated or contrived, no matter who performs them or what protocols and methods underlie them. AV test lab methodology is only an approximation of what a typical security software user would experience against the average security threat.
The term "real-world" and "360 Assessments" as a "test methodology" or suite of tests was done to quash complaints by security software publishers that the testing was not showing their product features in the best light. The babies cried "Foul! Not fair! Not fair!" So labs came up with jingaling-jingaling marketing labels for their tests. This made their sensitive clients happy because it provided tests named and designed to provide the "proof" that they are quality security software where the publisher can state "You are protected." It's 100% marketing driven - and Microsoft itself is mostly responsible for why this kind of testing and marketing exists.
Security software developers design their products around a set of features they believe to be the best way to protect against threats. Any test that does not show off these features to the publisher's satisfaction will be deemed "invalid," and the publisher will do everything it can to discredit the results. Or, when it comes down to it, the publisher will place the blame on the user with the predictable arguments: "The user did something that is not covered by the product, the user did not understand the product, the user misconfigured the product, the user selected "Allow," users do not look at advanced settings to increase protections to cover this case," etc.
Unfortunately, all the test labs have caved in to these publisher complaints and created test protocols that are acceptable to the security software publishers - who pay the labs money. Any entity that derives its living from "clients" is going to cater to those clients in order to keep them happy and the revenue flowing. This is not to say that the testing is poorly designed or fundamentally compromised by "profit before accurate test results" or anything similar. It just means the AV labs are not going to perform tests that would bypass every single security product. They will not assess the products in ways that go beyond the "vanilla" testing the security software publishers accept. Anything outside of that, those publishers will cry "Invalid!"
The best, most accurate testing comes from independent enthusiasts who find ways to bypass specific security features. This is where you get a clear and honest demonstration which, if you understand it, makes you realize that what the security software publisher says just ain't true, or is not entirely true. Those with greater insight realize that every test is specific. It might even be purpose-built to show a weakness in one product that does not exist in another. That does not invalidate what is being demonstrated.
Google's Project Zero operates on this basis. Tavis Ormandy has been notorious for ignoring complaints from security software publishers and security software enthusiasts that his findings are not valid. His reply to any detractors has always been: "F*** O**. The test results are accurate and what I am saying is the truth."
It is unfortunate, but there are those who assume that a person's preference for one security software over any other automatically makes their demonstrations nefarious or wrongly biased. Well, if that is the case, then every security software publisher out there has commissioned very specific tests with assessment firms such as MRG Effitas to show its product is better than the competitors it picks-and-chooses to be assessed against - thereby guaranteeing the end result it wants, which is "their product is better than all others."
All tests should be approached with: "I need to figure out what is being shown here, what it implies, and most importantly what it does not show or imply. I need to not add words or intent to the test. Unless something is explicitly stated, I should assume nothing. There are an infinite number of ways I can interpret the test and its results. I should remove my own biases when viewing, interpreting, and reviewing the results."
The vast majority of people cannot do that. They bring their own personal junk and can't get past themselves when interpreting anything.