We still participate in some tests, like VB100 and AVLab. Other than that, we don't need a third-party testing lab that tests with a handful of samples per month (compared to what is out there every single day) to know how good or bad we are compared to other companies. We literally perform thousands of small-scale tests every single day to make sure we stay on top of things, and we have feedback loops and metrics from customers that tell us whether they got hit and with what. So participation in AV tests has never been about "quality checks" to begin with; it is completely unfit to replace in-house QA when it comes to ensuring protection quality.
The truth is this: AV tests are marketing tools. Nothing else. Picking 100 samples per month out of a pool of tens of millions will tell you absolutely nothing about the effectiveness of an AV. Quite frankly, the malware hub here in this forum probably tests with more samples than AV-Test or AV-C do every month for their "real world protection" tests.
The other aspect is that we never really fit into these tests to begin with. For example, we constantly got penalized in the form of false positives because of how our surf protection works: we don't want to compromise our users' privacy and security, so we use DNS filtering instead of intercepting encrypted traffic. If a domain also hosted other content besides malware, that counted as an FP for us, because we blocked all downloads from that domain, not just the malicious one.
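To make that concrete, here is a minimal sketch of why domain-level filtering inevitably produces this kind of "false positive" in a URL-based test. This is purely illustrative and not our actual implementation; the domain and file names are made up.

```python
# Illustrative only: domain-level (DNS-style) blocking never sees the URL path,
# because the encrypted HTTPS payload is never decrypted or inspected.
from urllib.parse import urlparse

# Hypothetical blocklist: domains known to host at least one malicious file.
BLOCKED_DOMAINS = {"file-host.example"}

def is_blocked(url: str) -> bool:
    """Decide purely on the hostname, the way a DNS filter has to."""
    host = urlparse(url).hostname or ""
    return host in BLOCKED_DOMAINS

# The malicious download is blocked...
print(is_blocked("https://file-host.example/payload.exe"))    # True
# ...but so is every clean file on the same domain, which a URL-based
# test then scores as a false positive.
print(is_blocked("https://file-host.example/legit-tool.zip"))  # True
```

A product that intercepts and decrypts the traffic could block only the first URL and allow the second, which is exactly the trade-off described above.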
Of course, we could have just thrown our commitment to our users' privacy overboard and done what everyone else does: break open every single one of your connections, probably screwing up TLS in the process, and ultimately leave your system open to more severe threats than most malware pose.
So ultimately we had to ask ourselves: Do we want to build software that protects our users from actual real-life threats, or do we want to build a product that solely suits AV testers?
It's an open secret that most AV vendors who consistently do well in these tests have dedicated teams that optimise their products specifically to get a great score. That ranges from figuring out which threat intel feeds the testing companies use and obtaining access to them as well, so the samples are known in advance, to shadier methods like building mechanisms into the product that try to recognise when it is being "tested" so it can cheat (think "Dieselgate", but for AVs).
You may think that great performance in these tests automatically translates into great real-life performance, but that simply isn't true. Just visit any of the big tech support communities that provide help with malware infections and check which products those users had installed when they got infected. You will quickly notice that a majority of the products that regularly get perfect scores in tests don't do that well in the real world.
We therefore decided to just stop participating in the big tests and stop playing these kinds of games. It's not like positive or negative results had any measurable impact on our revenue and sales anyway. We are better off putting that money into more developers (yes: plural) and focusing on what we are arguably best at: keeping our users safe.