App Review Kaspersky vs Windows Defender

It is advised to take all reviews with a grain of salt. In extreme cases some reviews use dramatization for entertainment purposes.
Content created by
PC Security Channel

cruelsister

Level 42
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Apr 13, 2013
3,144
It's heavy on some systems and isn't on others.
Perfectly said! The "lightness" of any AM product is directly proportional to the quality of the system hardware employed. Everything is light on "The Best System Ever", and everything will be rather sluggish on a 10-year-old computer that was a POS when new.

The best test (at least for me) for determining the actual resource requirements of a security app is to run it in a VM while varying the resource allocation to that VM. The results are normally quite apparent.
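Purely as an illustration of that VM approach (not from the post above): run the same file-heavy workload inside each VM configuration, with each security product installed in turn, and compare the wall-clock times. The target folder and the workload itself are assumptions chosen for this sketch.

```python
# Hypothetical workload timer: run this same script inside each VM
# configuration (e.g. 2 vCPUs/4 GB vs 4 vCPUs/8 GB) and with each AV
# installed, then compare the reported times. The folder is an assumption.
import os
import time

TARGET = r"C:\Windows\System32"  # assumed folder with many files to touch

def scan_and_read(root: str, max_bytes: int = 4096) -> int:
    """Walk the tree and read the first few KB of every file, which
    typically triggers the resident AV's on-access scanning."""
    touched = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                with open(os.path.join(dirpath, name), "rb") as f:
                    f.read(max_bytes)
                touched += 1
            except OSError:
                pass  # skip locked or privileged files
    return touched

start = time.perf_counter()
count = scan_and_read(TARGET)
print(f"Read {count} files in {time.perf_counter() - start:.1f} s")
```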
 

monkeylove

Level 11
Verified
Top Poster
Well-known
Mar 9, 2014
540
That's my situation! I'm using Win 10 on a 10-year-old i5 that I've been maintaining to the best of my abilities (16 GB RAM, SSD, optimized settings, and even an AIO liquid-cooling system). It's as fast as I can make it, but I also use it as an Emby server with multiple drives connected to it. So when I open large spreadsheets, browse folders with lots of content, and load many pages in Firefox, then besides the benchmark scores I can also see how much slower these tasks are with WD. And if it also can't provide web protection for Firefox, then I'd rather use another AV that can do that and is lighter.
 

simmerskool

Level 31
Verified
Top Poster
Well-known
Apr 16, 2017
2,094
Perfectly said! The "lightness" of any AM product is directly proportional to the quality of the system hardware employed. Everything is light on "The Best System Ever", and everything will be rather sluggish on a 10-year-old computer that was a POS when new.

The best test (at least for me) for determining the actual resource requirements of a security app is to run it in a VM while varying the resource allocation to that VM. The results are normally quite apparent.
Yes, FWIW, I have Kaspersky Standard running in a VMware guest (Win10 22H2) and F-Secure SAFE running in another, identical guest (I do not run both VMs at the same time). I gleaned that Kaspersky "locks down" more aspects of the system but is obviously heavier, while SAFE is relatively light. Both guests have 16 GB RAM. The CPU is aging some...
EDIT: Now I recall Kaspersky blocked Bitwarden from auto-filling passwords; it said to do that manually. (Maybe there's a way to turn off that "feature" in K?) (I'm running the VM with SAFE at the moment.)
 

oldschool

Level 82
Verified
Top Poster
Well-known
Mar 29, 2018
7,102
From a post over at Wilders, I thought this was the appropriate place to re-post this piece: The Holy Grail of AV Testing, and Why It Will Never Be Found
EK voices the issues I have with a lot of testing, especially YouTubers and other assorted 'Techxperts'. ;)

No offense to our beloved Hub testers as their tests use one of EK's preferred testing methods.😍
 

ForgottenSeer 97327

@oldschool very informative indeed, thanks for posting (y)
To win in these [on demand] tests you just suck up to the source of the malware used by the most famous testers (and these sources are well-known – VirusTotal, Jotti, and the malware-swappers in different AV companies), and then detect everything that all the others detect; that is, if a file is detected by competitors, to simply detect it using MD5 or something similar. No in-depth research and superior technologies to combat real life attacks are needed.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,119
From a post over at Wilders, I thought this was the appropriate place to re-post this piece: The Holy Grail of AV Testing, and Why It Will Never Be Found

It is good to see this short article (written in 2011 by Eugene Kaspersky) - I read it several years ago. The crucial thing from it:

So how do you test quality of protection?

It stands to reason that it should be done in an environment that mirrors reality as closely as possible. The methodology of a good test must be based on the most common and widespread scenarios of what users face in real life. Everything else is incidental stuff that just doesn’t matter.
....
Besides the difficulties in getting the malware selection right for a test, there is the greater difficulty in obtaining conditions that are closest to reality, since they are extremely difficult to automate. That is, such tests demand a great deal of mechanical manual work.

How close to real life is a video in this thread...? About as close as Earth seen from the stars. :)
https://malwaretips.com/threads/kaspersky-vs-windows-defender.119700/post-1017818
 

Shadowra

Level 33
Verified
Top Poster
Content Creator
Malware Tester
Well-known
Sep 2, 2021
2,295
I disagree. Not all AVs depend on the cloud, and it has been proven. Bitdefender doesn't depend on the cloud (ATC works without cloud dependency), Kaspersky's System Watcher doesn't use the cloud, Symantec's SONAR doesn't use the cloud, and F-Secure's DeepGuard doesn't use the cloud either; they're all behavior blockers that don't depend on the cloud. With WD, if you disable the cloud, protection drops to 60% and it catches almost no ransomware; it's a cloud-dependent AV.
Greetings.

All antivirus products perform better with Cloud enabled than without.
That's why an offline test will be included in my tests from the end of January.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,119
The tests that come closest to the idea from the article are AV-Test and AV-Comparatives (Real-World tests). But even in these tests, the way the scores are presented can be misleading for most users.
  1. In fact, the chart of a single test is an illusion, because the number of testing samples is insufficient.
  2. The scores of all awarded AVs (in the same group) must be treated as equal, despite the different scores (usually when the difference in undetected samples is < 4). This information can be found in the testing methodology.
  3. When an AV missed 0 samples in June and 3 samples in August, it does not mean that the protection in the wild was different in June and August! For most AVs the protection rate is about 1.5 missed samples per 700 samples, so 0 and 3 missed samples are outcomes of the same order (see the sketch below).
Non-illusory results can be obtained only after combining many such tests, and then we can see that the differences are very small. For example:
https://malwaretips.com/threads/the-best-home-av-protection-2021.112213/post-973591
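A minimal sketch of that point, assuming the misses in a ~700-sample test roughly follow a Poisson distribution with a mean of about 1.5 (my own illustration, not part of the post above):

```python
# Rough illustration: if an AV misses on average ~1.5 of ~700 samples per
# test, how likely are exactly 0, 1, 2, or 3 misses in a single test?
# The Poisson model and the 1.5 mean are assumptions for this sketch.
from math import exp, factorial

MEAN_MISSES = 1.5

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k events when the long-run average is lam."""
    return exp(-lam) * lam**k / factorial(k)

for k in range(4):
    print(f"P({k} missed samples) = {poisson_pmf(k, MEAN_MISSES):.3f}")
# Output: P(0) ≈ 0.223, P(1) ≈ 0.335, P(2) ≈ 0.251, P(3) ≈ 0.126
```

On these assumptions, a 0-miss test and a 3-miss test are both ordinary outcomes for the same underlying protection level, which is the point being made above.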
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,119
Here is an example of this illusion (AV-Comparatives):

[attached image: AV-Comparatives test results chart]


In fact, there is no indication that the 13 best-scoring AVs would achieve different protection in the wild, despite their different scores.
From the testing methodology it follows that AVs in the same cluster are not differentiated by a single test.

[attached image]
 


RansomwareRemediation

Level 4
Verified
Well-known
Jun 22, 2020
163
From a post over at Wilders, I thought this was the appropriate place to re-post this piece: The Holy Grail of AV Testing, and Why It Will Never Be Found
EK voices the issues I have with a lot of testing, especially YouTubers and other assorted 'Techxperts'. ;)

No offense to our beloved Hub testers as their tests use one of EK's preferred testing methods.😍
I just read that piece by Eugene Kaspersky, and what he says is true: on-demand tests are poorly done, because cheating is possible. Maybe they will laugh at me :(, but I feel that the tests carried out by YouTubers are the ones that most resemble reality, and they are where you can see how an AV really works. Eugene does not mention those tests. He also mentions that AV vendors should be dedicated to improving the quality of the product.
Greetings.
PS: Even Eugene says it: to do a test you must install an AV (with default settings) and try to execute malware in all possible ways, and YouTubers do that. With this, you can see the response of the AV with all its active technologies. That's why I love tests like Shadowra's, for example.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,119
I admire the effort of the testers. Some tests (especially on MT) are interesting because they can show the weak points of the tested software and other aspects of the applied protection. (y)
But one thing should be realized: the weak points do not translate directly into protection in the wild. We have similar examples in real life: most people in the world could die from Ebola, but only a very small percentage do. This is a striking result compared to malaria, with about 250 million cases per year (over 600,000 deaths).
The real problem arises when an AV vendor ignores a weak point for too long. It seems that most AV vendors react to such problems similarly, because the differences in AV protection are not big.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,119
I'm not talking about the number of malware files, but about how YouTubers run the test, which for me is the closest thing to reality.

Usually, such tests only look like real life, but they are not. One can find more info by reading the AMTSO methodology.

A test that follows Eugene Kaspersky's idea requires visiting thousands of websites each day, the kind of websites that users all around the world could plausibly visit. When a suspicious file is found, it is immediately sent to a sandbox and analyzed. If it is malicious, it is tested against the AVs within a short time; the test must be done quickly to avoid "dead" samples. In this way, a group of professionals can test about 10-20 samples in one day. Even then, the result of one test (two months of testing) cannot differentiate most of the tested AVs, so the tests must be repeated constantly. Statistically significant differences can be seen only after several such tests.
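Purely to make that workflow concrete, here is a hypothetical sketch of the daily loop; every helper below is a made-up stub, not anyone's actual harness or a real framework's API:

```python
# Hypothetical sketch of the "real-world" testing loop described above.
# All helper functions are stubs with invented names.
from datetime import datetime, timedelta, timezone

MAX_SAMPLE_AGE = timedelta(hours=24)  # assumed freshness cut-off to avoid "dead" samples

def find_suspicious_file(url):
    """Stub: crawl the URL; return (payload, first_seen) or None."""
    return None

def sandbox_is_malicious(payload) -> bool:
    """Stub: detonate the payload in a sandbox and return the verdict."""
    return False

def test_product(av_name, payload) -> str:
    """Stub: expose one product to the live sample; return 'blocked' or 'missed'."""
    return "blocked"

def daily_run(candidate_urls, products_under_test):
    results = []
    for url in candidate_urls:
        found = find_suspicious_file(url)
        if found is None:
            continue
        payload, first_seen = found
        if not sandbox_is_malicious(payload):
            continue
        if datetime.now(timezone.utc) - first_seen > MAX_SAMPLE_AGE:
            continue  # too old; the attack chain may already be dead
        # Test every product against the sample as soon as possible.
        results.append({av: test_product(av, payload) for av in products_under_test})
    return results
```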

So, if there were a few hundred YouTube tests in one month, made by different people all around the world, then you could put the results together and say that these tests show something. Can anybody do that? :)

Unfortunately, even with those hundreds of tests, the result would be questionable, because many YouTube testers use samples from known sources that are also known to the AV vendors. So the samples are not representative of the unknown samples that usually infect users in real life.

Post edited/corrected.
 

Shadowra

Level 33
Verified
Top Poster
Content Creator
Malware Tester
Well-known
Sep 2, 2021
2,295
To quickly address this and explain how I do it:

For URLs, I use a source that everyone knows. It's not the method I prefer, because URLs are quickly detected and die very quickly... I do it on principle, but I hate it.

For my packs, I mix it up. I put in old malware (like Sality, Virut, etc.) to see if the lab knows them. Then, before taking fresh samples, I run them on a dedicated VM to see if they work. Once everything is OK, I take them.
I also plan to set up a personal honeypot, but I lack the time...

Then, for the YouTube tests, it depends.
On my side, I listen to all the opinions to improve my videos. For example, at the end of January I'm going to add offline pack tests, because an antivirus won't react the same way as when it's online (hello Microsoft Defender, because I'll start with it before the end of January!)

As for the other testers, you should know that I don't watch any of them. Faking a result is easy to do, especially if the tester is sponsored by a brand...
Whether it's Leo, CS Security, etc., I don't watch any of them.
The only ones I do watch are League Of Antivirus (but I think it has stopped) and ZeroTech00 (which I like a lot), but no others.
Especially since I'm the kind of person who believes what I do rather than what I see :)

If you have any questions, don't hesitate, of course!
 

RoboMan

Level 35
Verified
Top Poster
Content Creator
Well-known
Jun 24, 2016
2,400
There are only two points I want to drop here:

1. It surprises me how little effort Microsoft is putting into the KNOWN and REPORTED GUI bugs. That's one of the reasons I can't like it yet. It's ridiculous how hard it is to remove threats, since the GUI tends to bug out, misinform, and crash. That's a basic AV function, and it cannot keep working the way it does.

2. I see a lot of comments where the discussion is about "detection rates", and I strongly advise (mostly novice users) NOT to guide themselves by these. We must always remember that this test, and all tests, should be taken with a grain of salt; the numbers only represent a general idea. For example, I've seen AVs with a 100% detection rate, meaning all the malware files within THAT TEST were detected, but that in no way means the antivirus will always catch 100% of all existing malware. I've also seen Kaspersky score lower than competitors in "protection rates", which doesn't mean it's worse, since, for example, a well-configured Application Control in Kaspersky will never let malware execute in the first place (of course, it may fail some time, like in the CCleaner case).

All in all, what I'm saying is: do not let tests mean more than a general idea of what deserves or does not deserve to be tested by YOU when you feel like using it.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,119
I may have gotten "lost" in this thread... so, e.g., is AV-C or any other lab publishing a statistically significant protection ranking of AVs? (And system performance is always an issue.)

Even the one-year summary is not conclusive. Here is a quote from AV-C:

There is no such thing as the perfect security program, or the best one for all needs and every user. Being recognized as “Product of the Year” does not mean that a program is the “best” in all cases and for everyone: it only means that its overall performance in our tests throughout the year was consistent and unbeaten.

The one-year summary of the AV-C Real-World Protection tests combines the protection scores with the false-alarm tests. Adding the results of two different categories is always questionable, because there is no accepted method for doing it. One could take the same R-W test and false-alarm results, consider one of them more meaningful, and the outcome would be different (different winners). For example, the cumulative results presented in my post (for the year 2021) do not include the false-alarm tests (which can also be questionable). As can be seen, the winners are different compared to those presented by AV-C.
https://malwaretips.com/threads/the-best-home-av-protection-2021.112213/post-973591

Edit.
The results of the false-alarm tests can be interpreted differently than AV-C does. The method used by AV-C overestimates false alarms on very rare software: a false alarm on a program used by 1,000 people around the world is counted in the same way as one on a program used by hundreds of thousands of users.
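A minimal sketch of what prevalence-weighted false-alarm counting could look like, as opposed to counting every FP equally; the figures and the weighting scheme are my own illustration, not AV-C's actual method:

```python
# Illustration only: weight each false positive by how widely the flagged
# program is used, instead of counting every FP as 1. The user counts and
# the reference value below are made up for the example.
false_positives = [
    {"program": "rare_tool.exe",   "users": 1_000},
    {"program": "popular_app.exe", "users": 300_000},
]

def unweighted_count(fps) -> int:
    # Per-sample counting: every false alarm weighs the same.
    return len(fps)

def prevalence_weighted(fps, reference_users: int = 100_000) -> float:
    # Each false alarm contributes in proportion to its user base, capped at 1.
    return sum(min(fp["users"] / reference_users, 1.0) for fp in fps)

print("Unweighted FP count:        ", unweighted_count(false_positives))     # 2
print("Prevalence-weighted FP cost:", prevalence_weighted(false_positives))  # 1.01
```

Under this kind of weighting, a false alarm on a program that almost nobody uses barely moves the total, whereas per-sample counting treats both cases identically.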
 
