My issue (among many others) with the testing sites is that they give some products a "100%" score. Obviously what they mean is that the product stopped all of the samples used in that particular test, but many readers will infer that these products will detect 100% of ALL malware.
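To put a number on how far apart those two readings are: a standard statistical rule of thumb (the "rule of three") says that detecting all n samples in a test only bounds the true miss rate to roughly 3/n at 95% confidence. A minimal sketch, with sample counts chosen purely for illustration:

```python
# If a product detects all n samples (0 misses), the one-sided
# upper confidence bound on its true miss rate p solves
# (1 - p)^n = 1 - confidence. For 95% confidence this works out
# to roughly 3/n (the "rule of three").

def miss_rate_upper_bound(n_samples: int, confidence: float = 0.95) -> float:
    """Exact one-sided upper bound on the miss rate after 0 misses in n trials."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_samples)

# Hypothetical test sizes, for illustration only:
for n in (100, 500, 5000):
    print(f"{n} samples, 0 misses -> true miss rate could still be "
          f"up to {miss_rate_upper_bound(n):.3%}")
```

So a "100%" score on a 500-sample test is statistically consistent with missing up to about 0.6% of malware in the wild, which is exactly the gap between "stopped every sample we tried" and "detects everything."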
Those who are familiar with Symbolic Logic will recognize what these Pro sites are doing: it's the old argument from authority (argumentum ad verecundiam) fallacy. They present findings that are Likely to be true and hope the reader will infer that they must be Necessarily true. In other words, they hope the reader will conclude that since they are Professionals, what they present must be the Word of God.
But continuing my post-"Extra Spicy, Please" chicken Vindaloo rant (should have known better than to request that from a restaurant named Shiva's Revenge), one can also see the opposite of this on sites like Wilders, where private testing is suppressed. This is known as a False Appeal to Authority (the Courtier's Reply), where it is assumed that any argument made by someone who does not post credentials must be inherently invalid. But this misapplies the argument from authority fallacy: the lack of an official, relevant qualification doesn't automatically make an argument invalid.
Anyway, I'd rather have the Pro sites give a Good-Better-Best result and have them go on their way.
Unfortunately, test lab results are often misinterpreted. On top of that, software publishers market the results as a generic validation of their software -- which is problematic on so many levels. Every single 100% performance score comes with caveats, and those caveats span a very broad range of exceptions and limitations - from capabilities to usability.
The fact of the matter is that testing is highly imperfect. Furthermore, the full range of testing problems is never explained in the test reports.
"This is what the test report states, but this is what it actually means." Two different things.
Readily available, easy-to-understand, comprehensive transparency is not one of the security software industry's strong points.
This is just my personal opinion.
There are many internal industry debates regarding such matters - with little agreement as to what is optimal. As with most things in life, cost - to a large extent - dictates how testing is performed and reported.