Do not choice Panda ! (Panda Dome Free vs Panda Dome Complete)
<blockquote data-quote="electroplate" data-source="post: 1018433" data-attributes="member: 94804"><p>Yes, Panda did badly, but it also detected many samples. The problem always seems to be that cloud-based AV has a limit to how many infections it can soak up at once without stalling. A real user would not be hit with that many at once; some have never been hit at all. So we have the theoretical test usage of known/unknown samples, but how many should be used to prove efficacy? AV-C uses many over time; AVLabs Poland seems to use fewer, but newer or fileless examples. If it is a matter of the quantity of knowns versus hostile unknowns, how can we prove anything?</p><p>I'm having trouble understanding how Panda gets a high score with AV-C if it is so provably poor in other tests. Webroot finds few friends here, but is proven by AVLabs Poland. If all the vendors are paying for their results and the methodology is clearly published by AV-C etc., whom am I to trust with my money? If WiseVector StopX is so effective (I use it), and Webroot's techniques are so trusted by business and proven by AVLabs Poland, how can traditional AV survive?</p><p>Apologies if I'm being naive or obvious, but it comes down to this for me: my risk factor is very low. Maybe Panda or Webroot would be fine to catch the odd nasty on websites, but SmartScreen seems to get the AMTSO test files first anyway. To say that Panda or Webroot or any other product cannot protect users is, I think, harsh. We could look at protection/detection percentages in risk bands as well as bare numbers. It's all very contradictory when one test passes a product that another fails, so to a certain extent testing has become an academic exercise. I trust all testers who are seeking the truth; maybe there is no empirical, provable truth in the malware game. Just food for thought that MT and Wilders give me daily! Health and happiness in 2023, everyone. Cheers!</p></blockquote><p></p>