cruelsister
Thread author
These applications get 100% on tests vs. ransomware:
In the past, people believed the Earth was flat, and anyone who opposed that view was killed... this is the new version of the same old wine in a new bottle.
One cannot extrapolate the test results against 28 ransomware samples to include how the software will perform against all ransomware. It is a mistake to assume that if a product performs very well in a published test, it will perform identically against all other malware, even of the same classification, over time.
The greatest limitation of security software testing - especially of products set by default to use signature/heuristic/behavioral detection - is that the test results are valid only for the samples used during testing.
Although I admit what you mention is true, I wouldn't say that these kinds of tests work only with the specific malware samples used. Lots of viruses are coded with the same structure, or almost the same. Depending on the aim it tries to achieve, many behave the same way. So I would say these tests only tell us how an antivirus solution behaves against a specific malware and its coding variants.
Indeed, very true. Actually, most signature-based solutions fail because of this. It's just that I, personally, wouldn't claim that a test with 100 samples means only those files can be detected. Although many files can be tweaked to avoid detection by a heuristic/signature database, some antimalware suites are able to detect a variant if it behaves the same way as others included in their malware database. Still, this isn't 100% accurate and fails most of the time, because sig-based solutions are dying slowly.

Variants can be modified by various means such that they can bypass signature/heuristic/behavioral detections. A skilled malc0der will have no trouble doing so.
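To see why trivial variants defeat the simplest kind of signature, here is a minimal sketch (not from this thread) assuming a naive hash-based signature database; the "sample" bytes and the single appended padding byte are purely hypothetical:

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest used as a naive file signature."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical malware body and a hash-based signature database built from it.
sample = b"MZ\x90\x00...fake ransomware body..."
signature_db = {sha256(sample)}

# A one-byte tweak produces a "variant" with identical behavior on execution
# (appended padding is ignored by most loaders) but a different hash.
variant = sample + b"\x00"

print(sha256(sample) in signature_db)   # the original is detected
print(sha256(variant) in signature_db)  # the variant slips past the hash signature
```

Real engines use fuzzy/partial signatures and heuristics precisely because of this, but the same cat-and-mouse game applies at those layers too: the variant only needs to differ along whatever dimension the detector keys on.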
My issue (among many others) with the testing sites is that they give some products a "100%" score. Obviously, by this they mean that the product stopped all of the samples used in the test - but so many readers will infer that this actually means these products will detect 100% of ALL malware.
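There is even a standard back-of-the-envelope way to see how little a perfect score on a small sample proves. Assuming the often-mentioned 28-sample test (the sample count is taken from an earlier post in this thread), the "rule of three" gives an approximate 95% upper confidence bound on the true per-sample miss rate of 3/n even when zero misses were observed:

```python
# Rule of three: if 0 failures are observed in n independent trials, the
# approximate 95% upper confidence bound on the per-trial failure rate is 3/n.
n = 28  # ransomware samples in the test discussed above
upper_bound = 3 / n

# A clean 28/28 result is still statistically consistent with a product
# that misses roughly 1 in 10 ransomware samples in the wild.
print(f"95% upper bound on miss rate: {upper_bound:.1%}")
```

So "100% on 28 samples" and "misses about 10% of ransomware" are both compatible with the same test result - which is exactly why extrapolating to all malware is unjustified.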
Those familiar with Symbolic Logic know what these Pro sites are doing - it's the old "argument from authority" (argumentum ad verecundiam) fallacy: they present things that are Likely to be true and hope the reader will infer that the findings must be Necessarily true. In other words, they hope the reader will conclude that since they are Professionals, what they present must be the Word of God.
But continuing my post-"Extra Spicy, Please" chicken Vindaloo rant (should have known better than to request that from a restaurant named Shiva's Revenge), one can also see the opposite of this on sites like Wilders, where private testing is suppressed. This is known as a False Appeal to Authority (the Courtier's Reply), where it is assumed that any argument made by someone who does not post credentials must be inherently invalid - but this misapplies the Argument from Authority fallacy, as the lack of an official and relevant qualification doesn't automatically make an argument invalid.
Anyway, I'd rather have the Pro sites give a Good-Better-Best result and have them go on their way.