The results only apply to the samples used in the test and are not indicative of how the products would perform against malware you could potentially run into on the web.
The results do typically contain notices that the performance (such as detection rates) is exclusive to the test. If you check AV-C's reports, you'll see notices stating that products scoring 100% do not actually detect 100% of malicious software, only 100% of the samples used in that test.
As well as this, the samples used in the tests will be malware you could potentially run into on the web. The labs don't create a few hundred to several thousand malware samples themselves; they use real-world samples. Whether those samples are targeted at home users, businesses or both is irrelevant... malicious software is malicious software, and the products are tested against it to see how well they perform.
The same should apply to publications that report on said results and to those who do their own testing, like YouTubers, PCMag, etc.
YouTube tests are more opinionated than factual because a lot of the time the samples are not actually verified to be malicious. A few detections on VirusTotal or a high score on Hybrid-Analysis doesn't verify it; I've seen clean software score very high on VirusTotal because generic detections caused false positives, and Hybrid-Analysis (and other sandboxing) reports need to be reviewed by experienced researchers to tell what is malicious from what is benign.
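To illustrate the VirusTotal point, here's a rough Python sketch (assuming the public VirusTotal v3 API, the requests library and a placeholder API key) that pulls the detection stats for a file hash. Even when a few engines flag something, that alone doesn't prove it's malware:

```python
# Rough sketch: check how many engines flag a file hash on VirusTotal (v3 API).
# The API key below is a placeholder; the hash is the well-known EICAR test file.
import requests

API_KEY = "YOUR_VT_API_KEY"
FILE_HASH = "44d88612fea8a8f36de82e1278abb02f"

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{FILE_HASH}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)
resp.raise_for_status()

stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
flagged = stats.get("malicious", 0) + stats.get("suspicious", 0)
total = sum(stats.values())
print(f"{flagged}/{total} engines flagged this hash")
# A handful of generic/heuristic hits here is not proof of malware;
# it can just as easily be a false positive, which is exactly the point above.
```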
The tests conducted by testing companies such as VirusBulletin, AV-Comparatives, AV-TEST and many others are based on facts: a vendor either flagged X number of samples or it didn't, for on-demand scanning, real-time protection and URL-flagging checks. It's no secret that test results don't represent how a product will perform all the time, but vendors that usually come out on top in such tests, like Bitdefender and Avira, do tend to have excellent signatures, which is why many other vendors such as Emsisoft, F-Secure, Qihoo and IObit rely on the SDKs those vendors provide for dual/triple (or more) engine combinations.
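Just to illustrate what those lab numbers actually are (and aren't), here's a trivial sketch of the arithmetic behind a detection score, with made-up numbers; a "99.8%" result only means 998 of the 1,000 samples in that particular set were flagged, nothing more:

```python
# Toy illustration with made-up numbers: what a lab "detection rate" really is.
samples_in_test = 1000   # hypothetical sample set size for one test
samples_detected = 998   # hypothetical number flagged on-demand/in real time

detection_rate = samples_detected / samples_in_test
print(f"Detection rate for this sample set: {detection_rate:.1%}")
# Prints 99.8%, but only of these 1,000 samples, not of all malware in the wild.
```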
Vendors have good and bad days; testers at these companies do testing as a career and work there because they have experience, and review articles are based on facts established by the tests (usually with opinions presented alongside them). Banning reports from testing labs would be really silly and pointless censorship. If people don't agree with or trust a testing lab, they can simply not read its reports or care about them; there's no need to ban them...
There are even rumours that a vendor can pay more to get better results. That's not true unless the testing lab is unethical and doesn't know what it is doing. I'm pretty sure every vendor in a given test pays the same amount to have its product's results published as the other vendors in that same test.
At the end of the day, attacks are doubling or tripling each year and thousands of new samples arise each week, so a product detecting everything or nothing isn't a representation of how it will always perform. Sure, an average Joe who doesn't know anything might not see the disclaimers about results, but it's on them to read the reports properly before assuming. None of the popular professional labs state that "a product always has 100% detection" or anything like that. Vendors sometimes use marketing tricks like advertising "99% detection" when they have a proper award from a test where they scored full marks, but providers of non-technology products and services use their various awards and feedback for all sorts of other marketing tricks too, so this stuff is nothing new.
To each their own, but that is just what I think. I do see where you're coming from, though.