Wow, a lot has happened since I last posted in this thread.
I won't lie, I am a little disappointed in Sophos regarding how they are "marketing" Intercept X in SHP. Like many of you here, I assumed that while it may not have all the features of Intercept X, it was still based on the same code and implementation as their enterprise offering. I do hope they keep their word and implement it properly, especially the ML. IMO, having the ML in SHP would help not only their consumers but also their enterprise customers, since I assume the more information it's fed, the better it can become.
As for the MH testing, I am glad that @Evjl's Rain is testing it and would like to commend him for doing so! Would I like a better result? Sure, but we have to take the test with a grain of salt and understand what it means. The same goes for any other product tested on the HUB or elsewhere, for that matter.
I say this with the utmost respect for the great work the testers here do on the HUB, so please know I mean no disrespect at all. We all have to realize that all tests (HUB, AV-Comparatives, AV-Test, etc.) are going to have their limitations and may not always represent true real-world usage. I am no security expert and I honestly do not know how to perform malware testing, but from my years of reading, my understanding is that it is not always that simple. We could say that the HUB testing ignores the web components of the various products, thus eliminating a feature that may have otherwise stopped the malware from getting onto the system in the first place, or it may have made no difference at all. As for the professional tests, one limitation that comes to my mind (thanks to cruelsister) is that they don't publish how old the samples are, so are we getting a true representation of a product's capabilities? Just because a product doesn't do well doesn't mean it won't do its job in the real world. As good as a product may be, there still has to be some form of education and common sense when it comes to surfing the web, reading emails, etc. You cannot always blame the product if it were to fail you. One issue I have with tests that say product X got 100% is that (IMO) it creates bad habits/false assumptions. I am sure there are people out there who think, "well, I can download and click on anything and everything because my security program will save me every time." No product can protect you 100% of the time.
In the end, I think testing these products does help give us an idea of how they may perform, but we shouldn't be buying/using our products based solely on these tests. There are many, many more reasons why we choose the products that we do. Otherwise, we might as well all stop using what we have and switch to Kaspersky, as it seems to do the best in the vast majority of tests out there. Don't get me wrong, I like Kaspersky, and personally I feel they deserve the great results; they offer great protection, there's no arguing it. But it's just an example of why test results may not always be everything when choosing a product.