App Review: An F-Secure Safe follow-up

It is advised to take all reviews with a grain of salt. In extreme cases some reviews use dramatization for entertainment purposes.

509322

One cannot extrapolate from the test results against 28 ransomware samples to how the software will perform against all ransomware. It is a mistake to assume that software which performs very well in a published test will perform identically against all other malware, even of the same classification, over time.
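To make the extrapolation problem concrete, the statistical "rule of three" (a standard approximation, not anything taken from the test report) bounds what a perfect score on a small sample set can actually tell you:

```python
# "Rule of three": if a product misses 0 of n samples, an approximate 95%
# upper confidence bound on its true miss rate is 3/n. Illustrative only.
def rule_of_three_upper_bound(n: int) -> float:
    return 3.0 / n

# A perfect score against 28 samples is still consistent with a true miss
# rate of up to roughly 10.7% against ransomware in general:
print(f"{rule_of_three_upper_bound(28):.1%}")  # prints 10.7%
```

In other words, even a flawless 28-for-28 result leaves a wide margin of statistical uncertainty about performance in the wild.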

The greatest limitation of security software testing - especially of products that are set by default to use signature/heuristic/behavioral detection - is that the test results are valid only for the samples used during testing.
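A minimal sketch of why exact-match results don't generalize - using a toy hash-based signature store with made-up sample bytes, not any real product's engine:

```python
import hashlib

# Toy signature store: a hash matches only the exact bytes that were tested.
def signature(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

known_bad = {signature(b"sample-01")}  # hypothetical tested sample

def detected(data: bytes) -> bool:
    return signature(data) in known_bad

print(detected(b"sample-01"))   # True: the tested sample itself
print(detected(b"sample-01X"))  # False: any untested variant passes
```

Heuristic and behavioral layers are broader than this, but the same principle applies: the test certifies the tested samples, not the whole threat landscape.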
 

RoboMan

Level 35
Verified
Top Poster
Content Creator
Well-known
Jun 24, 2016
2,400
One cannot extrapolate from the test results against 28 ransomware samples to how the software will perform against all ransomware. It is a mistake to assume that software which performs very well in a published test will perform identically against all other malware, even of the same classification, over time.

The greatest limitation of security software testing - especially of products that are set by default to use signature/heuristic/behavioral detection - is that the test results are valid only for the samples used during testing.
Although I admit what you mention is true, I wouldn't say this kind of test works only against the specific malware samples used. Many viruses are coded with the same structure, or almost the same; depending on the aim a virus tries to achieve, many behave the same way. So I would say these tests only tell us how an antivirus solution behaves against a specific piece of malware and its coding variants.
 

509322

Although I admit what you mention is true, I wouldn't say this kind of test works only against the specific malware samples used. Many viruses are coded with the same structure, or almost the same; depending on the aim a virus tries to achieve, many behave the same way. So I would say these tests only tell us how an antivirus solution behaves against a specific piece of malware and its coding variants.

Variants can be modified by various means such that they can bypass signature/heuristic/behavioral detections. A skilled malc0der will have no trouble doing so. Time and effort are all that are required.
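A toy illustration (harmless stand-in bytes, not real malware handling) of how trivially a variant defeats an exact signature:

```python
import hashlib

# One appended byte -- the kind of change automated repacking makes --
# gives the "variant" a completely different hash than the original.
original = b"payload-bytes"     # harmless stand-in for a known sample
variant = original + b"\x00"    # functionally irrelevant padding

print(hashlib.sha256(original).hexdigest() ==
      hashlib.sha256(variant).hexdigest())
# prints False: every exact-match signature for the original stops firing
```

Real evasion of heuristic and behavioral layers takes more work than padding, but the asymmetry is the same: one cheap change by the attacker can invalidate a detection.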
 

RoboMan

Level 35
Verified
Top Poster
Content Creator
Well-known
Jun 24, 2016
2,400
Variants can be modified by various means such that they can bypass signature/heuristic/behavioral detections. A skilled malc0der will have no trouble doing so.
Indeed, very true. Actually, most signature-based solutions fail because of this. It's just that I, personally, wouldn't claim that a test with 100 samples means only those files can be detected. Although many files can be tweaked to avoid detection by heuristics or a signature database, some antimalware suites are able to detect a variant if it behaves the same way as others included in their malware database. Still, this isn't 100% accurate and fails much of the time, because signature-based solutions are slowly dying.
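A toy sketch of the behavior-based idea described here - the action labels are hypothetical placeholders, not any vendor's API. A variant with different bytes but the same action sequence still matches:

```python
# Match the *sequence of actions* a program performs, not its bytes.
RANSOMWARE_PATTERN = ["enumerate_files", "encrypt_file", "delete_backups"]

def behaves_like_ransomware(observed_actions: list[str]) -> bool:
    """True if the pattern appears, in order, within the observed
    actions -- regardless of which binary produced them."""
    it = iter(observed_actions)
    return all(step in it for step in RANSOMWARE_PATTERN)

# A repacked variant with a brand-new hash but identical behavior:
trace = ["open_window", "enumerate_files", "encrypt_file",
         "phone_home", "delete_backups"]
print(behaves_like_ransomware(trace))  # prints True
```

This is why behavioral detection generalizes further than signatures, while still being evadable by malware that reorders or disguises its actions.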
 

cruelsister

Level 42
Thread author
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Apr 13, 2013
3,148
My issue (among many others) with the testing sites is that they are giving some products the "100%" score. Obviously by this they mean that it stopped all of the samples used in the test- but so many will infer that this actually means that these products will detect 100% of ALL malware.

Those who are familiar with Symbolic Logic know what these Pro sites are doing- it's the old "argument from authority" (argumentum ad verecundiam) fallacy- they present things that are Likely to be true and hope the reader will infer that the findings must be Necessarily true. In other words, they hope the reader will conclude that, as they are Professionals, what they present must be the Word of God.

But continuing my post-"Extra Spicy, Please" chicken Vindaloo rant (should have known better than to request that from a restaurant named Shiva's Revenge), one can also see the opposite of this on sites like Wilders, where private testing is suppressed. This is known as a False Appeal to Authority (Courtier's Reply), where it is assumed that any argument made by someone who does not post credentials must be inherently invalid- but this misapplies the Argument from Authority fallacy, as the lack of an official and relevant qualification doesn't automatically make the argument invalid.

Anyway, I'd rather have the Pro sites give a Good-Better-Best result and have them go on their way.
 

509322

My issue (among many others) with the testing sites is that they are giving some products the "100%" score. Obviously by this they mean that it stopped all of the samples used in the test- but so many will infer that this actually means that these products will detect 100% of ALL malware.

Those who are familiar with Symbolic Logic know what these Pro sites are doing- it's the old "argument from authority" (argumentum ad verecundiam) fallacy- they present things that are Likely to be true and hope the reader will infer that the findings must be Necessarily true. In other words, they hope the reader will conclude that, as they are Professionals, what they present must be the Word of God.

But continuing my post-"Extra Spicy, Please" chicken Vindaloo rant (should have known better than to request that from a restaurant named Shiva's Revenge), one can also see the opposite of this on sites like Wilders, where private testing is suppressed. This is known as a False Appeal to Authority (Courtier's Reply), where it is assumed that any argument made by someone who does not post credentials must be inherently invalid- but this misapplies the Argument from Authority fallacy, as the lack of an official and relevant qualification doesn't automatically make the argument invalid.

Anyway, I'd rather have the Pro sites give a Good-Better-Best result and have them go on their way.

Unfortunately, test lab results are often misinterpreted. On top of that, software publishers market the results as a generic validation of their software -- which is problematic on so very many levels. Every single 100% performance score comes with caveats. Those caveats include a very broad range of exceptions and limitations - from capabilities to usability.

The fact of the matter is that testing is highly imperfect. Furthermore, the full range of testing problems is never explained in the test reports.

"This is what the test report states, but this is what it actually means." Two different things.

Readily available, easy-to-understand, comprehensive transparency is not one of the security software industry's strong points.

This is just my personal opinion.

There are many internal industry debates regarding such matters - with little agreement as to what is optimal. As with most things in life, cost - to a large extent - dictates how testing is performed and reported.
 
Last edited by a moderator.

jamescv7

Level 85
Verified
Honorary Member
Mar 15, 2011
13,070
I agree that the results are often misinterpreted.

No antivirus detection mechanism will definitely protect against every kind of infection, since human lapses and errors mean effective detections are not always formulated or maintained.
 
