App Review Do not choose Panda! (Panda Dome Free vs Panda Dome Complete)

It is advised to take all reviews with a grain of salt. In extreme cases some reviews use dramatization for entertainment purposes.
Content created by
Shadowra

Stopspying

Level 19
Verified
Top Poster
Well-known
Jan 21, 2018
814
So are you implying that Panda was drunk when @Shadowra tested it? Now we know why it did badly in the test.
Brilliant!

No, in all honesty I was not implying that the Panda AV was drunk, but the concept of software being 'under the influence' of behaviour-altering substances is one that malware fighters are unlikely to want spread around the internet!
 

ForgottenSeer 97327

And in the AV-C July-October 2022 Real-World Protection Test, Panda got 2 stars: 99.7% blocked, with 36 false positives. :unsure: I guess that's one reason MT (@Shadowra), @cruelsister, and others do REAL real-world protection tests. I need to read the fine print: do AV-C (& others) have disclaimers so they don't get sued by infected viewers? Or something like "for entertainment purposes only"?
Well, I made a joke about feeding the Panda one sample a year, but that could be the reason why Panda performs better in the real-world tests of the AV-testing labs. (I have read somewhere that most of the AV labs use fresh samples, but launch malware samples in 15-minute windows simultaneously for all AVs, to prevent one AV from learning from another via shared samples and VT detections.)

Machine Learning/Artificial Intelligence is the next level of static (pre-execution) heuristics, only ML/AI uses many more data points and estimates the probability of maliciousness from the distance to clusters of earlier known-bad/known-good sample values (while heuristics use only a few data points with, at best, some rule-based reasoning). This makes ML/AI a huge improvement over traditional heuristics. Behavioral blockers are often seen as the next level of HIPS, which is not true: a HIPS denies strange, out-of-bounds behavior (deny by default), while a BB allows behavior until an actor has accumulated so many warnings that it is blocked (allow by default).
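To make that distance-to-clusters idea concrete, here is a toy sketch of my own (the feature names and values are invented, and no vendor's real model works this simply): compute a centroid for known-bad and known-good samples, then score a new file by its relative distance to each.

```python
# Toy distance-to-cluster classifier (illustrative only; real AV ML
# models use thousands of features and far more sophisticated math).
import math

def centroid(samples):
    """Mean feature vector of a list of equal-length feature vectors."""
    return [sum(col) / len(samples) for col in zip(*samples)]

# Hypothetical pre-execution features: (entropy, import_count, is_packed).
known_bad  = [(7.8, 3, 1), (7.5, 2, 1), (7.9, 4, 1)]
known_good = [(5.1, 40, 0), (4.8, 55, 0), (5.5, 35, 0)]

bad_c, good_c = centroid(known_bad), centroid(known_good)

def malicious_score(features):
    """0..1 score: the closer to the bad centroid, the closer to 1."""
    d_bad  = math.dist(features, bad_c)    # Euclidean distance, Python 3.8+
    d_good = math.dist(features, good_c)
    return d_good / (d_good + d_bad)

print(malicious_score((7.7, 3, 1)))   # near the bad cluster  -> ~1
print(malicious_score((5.0, 45, 0)))  # near the good cluster -> ~0
```

A traditional heuristic, by contrast, would be a handful of hard-coded if-rules over two or three of those data points.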

Early BBs managed their own data acquisition and monitoring, until Windows started to protect or virtualize critical system components and the BBs switched to Windows' own event system for collecting unusual behavior. This has immense advantages (less overhead while obtaining more data), but at the cost of some loss of cause-effect information. Because the Windows OS became more robust, malware started to use smarter, more staged ways to intrude on the system (e.g. social engineering, obfuscation, LolBins, scripts, boot persistency, worms, outbound access, droppers, executables).

Due to the staged intrusion and insufficient cause-effect information, a behavior blocker will have a hard time recognizing the correlation between the different stages over time. The pattern-recognition capability of a behavior blocker decreases significantly when the tester launches multiple malware samples within a short period of time. All the event signals triggered (with insufficient cause-effect info) may overwhelm the BB, because it does not know how to deal with so many deviations from normal behavior. The event-sequence paths get disturbed, so the BB simply does not recognize the event-path patterns that are typical of some malware; it is effectively blinded by the flood of event triggers.
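A toy sketch of that failure mode (my own illustration, with invented event names and a made-up signature, not any real BB's design): a matcher that tracks per-actor event sequences works fine when one sample runs cleanly, but a staged intrusion whose dropper-to-payload links were lost never completes the pattern.

```python
# Toy behavior-blocker sketch: flag an actor whose event stream contains
# a known malicious sequence in order. Event names/signature are invented.
SIGNATURE = ["drop_file", "set_persistence", "connect_out"]

def scan(events):
    """events: list of (actor_id, event_name) tuples, in arrival order."""
    progress = {}            # actor -> how far along SIGNATURE it is
    flagged = []
    for actor, name in events:
        i = progress.get(actor, 0)
        if name == SIGNATURE[i]:
            i += 1
            if i == len(SIGNATURE):
                flagged.append(actor)
                i = 0
        progress[actor] = i
    return flagged

# Clean attribution: one actor performs all stages -> flagged.
print(scan([("mal.exe", "drop_file"),
            ("mal.exe", "set_persistence"),
            ("mal.exe", "connect_out")]))      # ['mal.exe']

# Cause-effect info lost: each stage appears under an unlinked actor,
# so no single actor accumulates the full sequence -> nothing flagged.
print(scan([("proc1", "drop_file"),            # dropper
            ("proc2", "set_persistence"),      # payload, link to proc1 lost
            ("proc3", "connect_out")]))        # []
```

Launch twenty samples within a few minutes and the real event stream looks much more like the second case: thousands of interleaved, partially attributed signals.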

I only know of Kaspersky System Watcher as the raven with the white feathers here: Kaspersky's System Watcher (behavioral blocker) somehow manages to keep track of the event sequences and pops up at the right moment to block malware from infecting your system, but Panda's and Webroot's behavioral blockers are obviously not capable of handling several malware intrusions in a short period of time (as clearly shown by Shadowra's videos).
 

bellgamin

Level 4
Verified
Well-known
Oct 11, 2016
160
@Shadowra -- Love your test! 😍 Detest what you found. 😖

I have dumped Panda accordingly. (Sob!) Ergo, release the hounds! I'm on the hunt for a replacement AV.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

THIS MT test-report thread by OP @harlan4096 helped me remember that, quite a long while ago, I happily ran Dr.Web AV for 3-4 years & loved it. Ergo, my next AV trial runs will include Dr.Web Security Space. BTW, the trial version was a 509 MB download. Oh well... 🥱
 

Divine_Barakah

Level 29
Verified
Top Poster
Well-known
May 10, 2019
1,854
bellgamin said:
Ergo, my next AV trial runs will include Dr.Web Security Space.
I recently tried Dr.Web; it felt heavy, and its scans take ages to finish.
 

electroplate

New Member
Apr 7, 2022
8
Yes, Panda did badly, but it also detected many. The problem always seems to be that cloud-based AV has a limit to how many infections it can soak up at once without stalling. A real user would not be hit with as many at once; some have never been hit at all. So we have the theoretical test usage of known/unknown samples, but how many should be used to prove efficacy? AV-C uses many over time; AVLab Poland seems to use fewer, but newer or fileless examples. If it is a matter of the quantity of knowns versus hostile unknowns, how can we prove anything?

I'm having trouble understanding how Panda gets a high score with AV-C if it is so provably poor in other tests. Webroot finds few friends here, but is proven by AVLab Poland. If all the vendors are paying for their results and the methodology is clearly published by AV-C etc., who am I to trust with my money? If WiseVector StopX is so effective (I use it) and Webroot's techniques are so trusted by business and proven by AVLab Poland, how can traditional AV survive?

Apologies if I'm being naive or obvious, but it comes down to this for me: my risk factor is very low. Maybe Panda or Webroot would be fine to catch the odd nasty on websites. But SmartScreen seems to get to the AMTSO test files first anyway. To say that Panda or Webroot or any other cannot protect users is, I think, harsh. We could look at protection/detection percentages in risk bands as well as bare numbers. It's all very contradictory to have one test pass when another fails, and thus to a certain extent testing has become an academic exercise. I trust all testers who are seeking the truth; maybe there is no empirical, provable truth in the malware game. Just food for thought that MT and Wilders give me daily!! Health and happiness in 2023, everyone. Cheers!
 

Divine_Barakah

Level 29
Verified
Top Poster
Well-known
May 10, 2019
1,854
electroplate said:
The problem always seems to be that cloud-based AV has a limit to how many infections it can soak up at once without stalling.
But as for cloud protection, I think security products tend to cache the database; I don't think they query the cloud every time a piece of malware is run. I might be wrong, though.
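That caching intuition is easy to sketch (a hypothetical client-side verdict cache, not how Panda or any specific vendor actually implements it): hash the file, serve repeat lookups locally, and pay the slow network round-trip only on a cache miss.

```python
# Toy client-side verdict cache for a cloud AV lookup. cloud_lookup()
# is a stand-in for a vendor's real network query, not a real API.
import time

CACHE_TTL = 3600.0      # seconds a cached verdict stays fresh
_cache = {}             # file hash -> (verdict, time cached)

def cloud_lookup(file_hash):
    """Simulated network round-trip: slow, and the realistic choke
    point when many unknown files arrive at once."""
    time.sleep(0.1)
    return "malicious" if file_hash.startswith("bad") else "clean"

def get_verdict(file_hash):
    hit = _cache.get(file_hash)
    if hit and time.time() - hit[1] < CACHE_TTL:
        return hit[0]                        # cache hit: no network
    verdict = cloud_lookup(file_hash)        # cache miss: ask the cloud
    _cache[file_hash] = (verdict, time.time())
    return verdict

print(get_verdict("bad-f00d"))  # first sight: pays the cloud latency
print(get_verdict("bad-f00d"))  # repeat: answered instantly from cache
```

The catch in a rapid-fire test is that every sample is seen for the first time, so every lookup is a cache miss and they all queue up behind the network.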
 

Shadowra

Level 33
Thread author
Verified
Top Poster
Content Creator
Malware Tester
Well-known
Sep 2, 2021
2,246
electroplate said:
I'm having trouble understanding how Panda gets a high score with AV-C if it is so provably poor in other tests.

Hello and welcome :)

Yes, I'm hard on Panda, but you have to understand that it's well deserved.
Panda always claims "better protection", leans on dubious tests (many know what I think of AV-C), and so on...
Personally, I do my tests with current, same-day threats (only the worms are dated, to gauge the reactivity of the lab).
And Panda unfortunately lacks behavioral protection, and especially reactivity in R&D.
It's such a pity, because if I compare it to its competitor Trend Micro, which is also a 100% cloud antivirus, Trend Micro manages to protect, because for me they have one of the best behavior blockers...
It hurts me to see potential wasted because Panda does not wake up...
 

ForgottenSeer 97327

Shadowra said:
And Panda unfortunately lacks behavioral protection.
No, Panda does have behavioral protection:
[attached screenshot]
Due to its cloud-lookup delay and its reliance on behavioral protection, Panda has two disadvantages when you launch a lot of malware. @electroplate makes a valid point: launching so much malware is not a realistic use case. That said, I started joking about Panda's inability to protect in security-enthusiast tests because I prefer my (freeware) antivirus to perform well in both types of tests (AV-testing labs and security enthusiasts).
 

Mahesh Sudula

Level 17
Verified
Top Poster
Well-known
Sep 3, 2017
818
Shadowra said:
Yes, I'm hard on Panda, but you have to understand that it's well deserved.
There are many other newcomer AV products on the market that well deserve testing and feedback to their vendors.

And for some others, like Panda, it is just a waste of time, mental energy, and a headache. They neither improve their product nor consider our feedback seriously.

Avira in the past "used to be" more or less Panda v2, but they somehow managed to stay in the race by heavily concentrating on, improving, and relying on the signature side; in fact, they are the BEST in that area even now. At least some effort was made to keep the product at OK standards.

Lately, thanks to the BullGuard acquisition followed by NortonLifeLock's, Avira integrated BullGuard Sentry as their core BB, which seems to "at least" work.

Either improve the product yourself and take the feedback seriously, or integrate other companies' product modules into yours, or else close up shop, Panda.
 
