App Review: Windows Defender Firewall Critique Part 2

It is advised to take all reviews with a grain of salt. In extreme cases, some reviews use dramatization for entertainment purposes.
Content created by
Ophelia

bazang

Level 6
Jul 3, 2024
298
Precisely. All tests have to be taken with a grain of salt.
All tests are specific and/or contrived, whether performed by a security software test lab, a researcher, or an enthusiast. As long as the premise of the test is sound, then the test itself is valid.

Insurance companies cover a lot of things in their policies that organizations and individuals will never suffer a financial loss from, but the insurance company keeps including them and charging the policy owners for that very-unlikely-to-happen coverage. It is 100% profit for the insurance company. The insurance underwriter's argument is "If it can happen, then it should be covered [and you should have to pay for that coverage whether or not you want it]."

This is how any security software, and testing of it, should be viewed. If something is within the realm of possibility, then a test showing that potentiality is valid. It matters not whether one considers it "real-world." Software publishers use the "not real-world" argument 1) to dismiss or discredit a test result and 2) to justify not fixing demonstrated weaknesses or vulnerabilities.

Lots of security software does very well in "real-world" tests and yet fails daily in real-world incidents against "real-world-in-the-wild" threats. Those products that are widely acclaimed among the user base as "The best-of-the-best-of-the-best, Sir!" fail to protect hundreds of thousands of systems. And then there are those millions of users who never get infected because - regardless of the system configuration - they do not do the things that get others infected. That truth does not in any way discredit or diminish any sound test results. Tests are assessments of "What ifs, corner cases, and abstractions of stupid human behaviors, misconfigurations, weaknesses & vulnerabilities."

The categorization of any test as "real-world" is actually a misnomer, because all security tests are fabricated or contrived, no matter who performs them or what protocols and methods they use. AV test lab methodology is only an approximation of what a typical security software user would experience against the average security threat.

The term "real-world" and "360 Assessments" as a "test methodology" or suite of tests was done to quash complaints by security software publishers that the testing was not showing their product features in the best light. The babies cried "Foul! Not fair! Not fair!" So labs came up with jingaling-jingaling marketing labels for their tests. This made their sensitive clients happy because it provided tests named and designed to provide the "proof" that they are quality security software where the publisher can state "You are protected." It's 100% marketing driven - and Microsoft itself is mostly responsible for why this kind of testing and marketing exists.

Security software developers design their products around a set of features they believe to be the best way to protect against threats. Any test that does not show off those features to the publisher's satisfaction, the publisher will consider "invalid" and do everything it can to discredit the results. Or, no matter what, the publisher - when it comes down to it - will place the blame on the user with the predictable arguments: "The user did something that is not covered by the product, the user did not understand the product, the user misconfigured the product, the user selected "Allow," users do not look at advanced settings to increase protections to cover this case," etc.

Unfortunately, all the test labs have caved in to these publisher complaints and created test protocols that are acceptable to the security software publishers - who pay the labs money. Any entity that derives its living from clients is going to cater to those clients in order to keep them happy and the revenue inflows going. This is not to say that the testing is not well designed, or that it is fundamentally compromised by "profit before accurate test results" or similar. It just means AV labs are not going to perform any tests that would bypass every single security product. They will not assess the products in a way that goes beyond the publisher-accepted "vanilla" testing. Anything outside of that, those publishers will cry "Invalid!"

The best, most accurate testing comes from independent enthusiasts who find ways to bypass specific security features. This is where you get a clear and honest demonstration that - if you understand it - makes you realize "What the security software publisher says just ain't true, or it is not entirely true." Those with greater insight realize that a test is, by nature, specific. It might even be purpose-built to show a weakness in one product that does not exist in another. That does not invalidate what is being demonstrated.

Google's Project Zero operates on this basis. Tavis Ormandy has been notorious for ignoring security software publishers' and enthusiasts' complaints that his findings are not valid. His reply to any detractors has always been: "F*** O**. The test results are accurate and what I am saying is the truth."

It is unfortunate, but there are those who automatically assume that a person's preference for one security software over any other automatically makes their demonstrations nefarious or wrongly biased. Well, if that is the case, then consider that every security software publisher out there has commissioned very specific tests with assessment firms such as MRG Effitas to show their product is better than the competitors that the publisher picks and chooses to be assessed against - thereby guaranteeing the end result that it wants, which is "their product is better than all others."

All tests should be approached with: "I need to figure out what is being shown here. What it implies. And most importantly, what it does not show or imply. And I need to not add words or intent to the test. Unless something is explicitly stated, I should assume nothing. There are an infinite number of ways I can interpret the test and its results. I should remove my own biases when viewing, interpreting, and reviewing the results."

The vast majority of people cannot do that. They bring their own personal junk and can't get past themselves when interpreting anything.
 

oldschool

Level 85
Verified
Top Poster
Well-known
Mar 29, 2018
7,606
All tests should be approached with: "I need to figure out what is being shown here. What it implies. And most importantly, what it does not show or imply. And I need to not add words or intent to the test. Unless something is explicitly stated, I should assume nothing. There are an infinite number of ways I can interpret the test and its results. I should remove my own biases when viewing, interpreting, and reviewing the results."
Thus my comment: "All tests have to be taken with a grain of salt."
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,510
All tests are specific and/or contrived, whether performed by a security software test lab, a researcher, or an enthusiast. As long as the premise of the test is sound, then the test itself is valid.
Unfortunately, there is no way to check if the test's premise is sound. You have to believe the tester's authority. After reading the documentation of AMTSO (the Anti-Malware Testing Standards Organization), I am ready to believe them. Making a test to AMTSO standards is extremely hard. I can also believe some well-known researchers. I have no reason to believe most enthusiasts (with a few exceptions).

Lots of security software does very well in "real-world" tests and yet fails daily in real-world incidents against "real-world-in-the-wild" threats.

That is normal. If you take the results of real-world tests by the leading AV testing labs, the top AVs miss on average about 1 per 500 samples. There are more than 400,000 new samples per day in the wild, so we have many unhappy people daily.

The babies cried "Foul! Not fair! Not fair!" So labs came up with jingaling-jingaling marketing labels for their tests.

I think that the labels are sensible. The samples from the "Real-World" and "Malware Protection" tests are very different. The first category includes very fresh web-based samples. The second category includes older samples originating from non-web sources. The average infection rate for the fresh web-based samples is several times higher. The web-based samples are mainly morphed samples that are short-lived in the wild. The non-web malware can remain stored (alive) for weeks.

For example:
Avast "Real-World" infection rate: 10/6748 ~ 0.0015
Avast "Malware Protection" infection rate: 15/270634 ~ 0.000055

This is not to say that the testing is not well designed, or that it is fundamentally compromised by "profit before accurate test results" or similar. It just means AV labs are not going to perform any tests that would bypass every single security product. They will not assess the products in a way that goes beyond the publisher-accepted "vanilla" testing. Anything outside of that, those publishers will cry "Invalid!"
I am unsure if this is true, but one cannot exclude the possibility that the influence of AV vendors on AV testing labs can make the tests somewhat biased.
Anyway, in the end, the cumulative results of those tests are probably close to the truth:

Real-World 2021-2022: SE Labs, AV-Comparatives, AV-TEST (7,548 samples in 24 tests)
Norton 360, Avast, Kaspersky: 12-18 missed samples (top AVs)
Microsoft, McAfee: 27-37 missed samples (very good AVs)


Google's Project Zero operates on this basis.
Yes, this project and some others (Black Hat, Bug Bounty, etc.) are valuable.

It is unfortunate, but there are those who automatically assume that a person's preference for one security software over any other automatically makes their demonstrations nefarious or wrongly biased.
Yes, it is unfortunate and irrational.

Unless something is explicitly stated, I should assume nothing. There are an infinite number of ways I can interpret the test and its results. I should remove my own biases when viewing, interpreting, and reviewing the results."
Well said. :)(y)
 

bazang

Level 6
Jul 3, 2024
298
I have no reason to believe most enthusiasts (with a few exceptions).
Most enthusiasts who want to be taken seriously do the things necessary in their testing to build a solid reputation. They are easily identified and differentiated from the typical YouTube tester.

I did not mean any random YouTube tester.

I am unsure if this is true, but one cannot exclude the possibility that the influence of AV vendors on AV testing labs can make the tests somewhat biased.
Those being assessed always have influence upon the assessors and the methodology used to assess, particularly when the relationship between the assessed and the assessor is subject to multi-level conflicts of interest, collusion, a common motive or objective, a pay-for-assessment system, gaming the assessment system (cheating), etc.

It is probably a good thing to be skeptical in such arrangements, even when every bit of info available to you points to all of it being performed and completed in a trustworthy manner.

Well said. :)(y)
Lord Burghley (William Cecil) once said to someone he wanted to degrade and grievously insult:

"If it were not for you being you, you would be a much better person. You are powerless to help yourself. A curse upon your family and our society. You could have been great." (I wonder what that sounded like in the original Old English. The tone and inflection of voice he used.)

When he said it, he knew it applied to us all in one way, shape, or form. Himself excluded, of course, even though he knew it to be particularly apt to himself. He was great by the mere fact that he considered himself to be one of the greatest in English history. Such it is for those who dictate the rules of Empire and their place in it. They even get to fabricate their own pleasant fictions about themselves.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,510
Most enthusiasts who want to be taken seriously do the things necessary in their testing to build a solid reputation. They are easily identified and differentiated from the typical YouTube tester.

If we exclude the tests made by MT members, are there examples of easily identified enthusiasts accepted by you? I have trouble coming up with such examples.

Those being assessed always have influence upon the assessors and the methodology used to assess, particularly when the relationship between the assessed and the assessor is subject to multi-level conflicts of interest, collusion, a common motive or objective, a pay-for-assessment system, gaming the assessment system (cheating), etc.

It is possible, but I did not discover a significant influence in the typical professional tests. Furthermore, the tests done by enthusiasts and researchers do not contradict the results of professional tests. Did you encounter any contradictions?
 

bazang

Level 6
Jul 3, 2024
298
If we exclude the tests made by MT members, are there examples of easily identified enthusiasts accepted by you? I have trouble coming up with such examples.
Most people I know who test, but are not professional researchers or security testers, do not make videos. They make demos at conferences such as Wild West Hackin' Fest or Black Hat. They are self-taught and take their "hobby" very seriously. They're not creating YouTube videos for likes or to be influencers (not that there is actually anything wrong with either, intrinsically).

It is possible, but I did not discover a significant influence in the typical professional tests.
All the tests performed by AV-Comparatives, AV-TEST, etc. are done to the dictates of what the security software publishers, as an industry group, find acceptable. That industry group has great influence, as it is the source of all AV lab revenue.

If you are a business and you do things that your clients do not find acceptable, then you will not be in business for very long.

Furthermore, the tests done by enthusiasts and researchers do not contradict the results of professional tests. Did you encounter any contradictions?
It depends upon what you define as a "contradiction."

Researchers and enthusiasts expose all the corner cases and things not covered by the dedicated professional test labs. Your video showing how publicly available info on stopping services can be weaponized against security software is a prime example. I know you did more testing than you published. You were able to bork other security software using the method, but you chose not to publish the results because you did not want to deal with MT drama from certain people.

Researchers definitely test differently than test labs and their objective is not to perform "general malware testing" like the AV test labs. They are motivated to find unique problems, problems in areas where others did not think to look, etc.

If you were given a budget of $20 million USD and instructed to hire researchers and enthusiast pentesters/security software testers to put all the leading security software through multi-level rounds of thorough testing, then after you posted the results here, the readers would have a much different, more cautious, less trusting attitude towards their favorite security software.

Knowing you, your counter-argument is going to immediately go to "Home users are not targeted so that kind of testing is not required and the home users need not worry." While it may be true they are not targeted, that is not the point. The point is that there are lots of ways to put holes in security software and it is not the test labs finding them. It is researchers and enthusiasts that do it.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,510
Researchers and enthusiasts expose all the corner cases and things not covered by the dedicated professional test labs.
(...)
Knowing you, your counter-argument is going to immediately go to "Home users are not targeted so that kind of testing is not required and the home users need not worry."

OK, it seems that I conflated your meaning of enthusiasts with common YouTube testers.
My argument is slightly different. As you noticed, the tests made by professional labs are usually very different from tests made by researchers and enthusiasts. Home users should not panic after reading/watching tests made by researchers and enthusiasts, because the vulnerabilities exposed have a very small impact on the infection rate at home. This is possible because Microsoft and most AV vendors are not blind to exposed vulnerabilities. Furthermore, some vulnerabilities are strictly related to enterprises. The importance of tests made by researchers and enthusiasts is much greater for Microsoft and AV vendors. So, all kinds of tests are welcome.
 

cruelsister

Level 43
Thread author
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Apr 13, 2013
3,224
who test, but are not professional researchers or security testers, do not make videos. They make demos at conferences such as Wild West Hackin' Fest or Black Hat
And some actively do both.
They are self-taught
God, that would have been soooo much cheaper.
 

bazang

Level 6
Jul 3, 2024
298
Home users should not panic after reading/watching tests made by researchers and enthusiasts, because the vulnerabilities exposed have a very small impact on the infection rate at home.
They should not, but when it comes to all things related to cybersecurity, the average user is vulnerable to panic due to the Ignorance Tax.

OK, it seems that I conflated your meaning of enthusiasts with common YouTube testers.
A lot of those testers are data points; some are pure entertainment. An exception is Leo. He is not popular here because he does not make videos that a lot of MT members find acceptable. The reason is that they are not at his level, nor do they share his approach to Microsoft Defender. Even though he is more than capable of performing methodical, carefully documented and explained testing, that is not his objective. He takes the approach of an average Windows user who does not understand or know Microsoft Defender beyond the basics. Leo is one of those professional testers who is not going to explain in detail what is being done in the video. He expects the viewer to either figure it out or to educate themselves to figure it out.

And some actively do both.
Yes they do.

Some of the people I know have their primary relationship with a keyboard, and their entire social lives are 100% digital. They're the ones that go to Black Hat and are on their devices the entire time. They attend presentations and never look up, their fingers constantly doing the klackity-klack. Somehow they absorb stuff. I suppose they have that capacity to multi-focus without looking up or tuning other things out.

God, that would have been soooo much cheaper.
You attended Dartmouth undergrad during the era when tuition & housing ran what, ~$20,000 per year? Not sure if you did a grad school program afterwards or not.

Did you not do the Rhodes Scholar thing, or was it the Sorbonne?

As undergrad were you partial aid, no aid, or a merit full-ride scholarship recipient?

Being self-taught is all about paying the Ignorance Tax with a lot of patience, frustration management, dedication, perseverance, and pounding it out on the keyboard. In a few words: sacrifice and hard work. There's a good bit of neuroticism sprinkled on that process (anyone who completes any degree program has the basic elements of neurotic behaviors - it is not anything derogatory; it is the behaviors that matter and not the labels). You of all people know that entire drill very well.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,510
An exception is Leo.

Yes, many people like such retro-tests to recall the times from 15 years ago. The malware world was much simpler and much more understandable. :)

Edit.
Fortunately, he published many good videos unrelated to AV comparison testing. (y)
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,510
Some readers may think that I am prejudiced against Leo's tests. So let's consider three important facts:
  1. The infection rate of tested AVs is incredibly low - typical for samples several days old. For Kaspersky, the infection rate is close to 0. A similar effect can be seen for AV-TEST samples in the non-real-world part of the tests (reference set).
  2. Only EXE files are tested.
  3. Files are executed on the system (non-web threats).
In such tests, the differences between AVs follow mainly from the completeness of the AV signatures (just like 15 years ago).
I can understand that such tests can still be useful for some people, especially where infections commonly propagate via non-web sources. If I recall correctly, a few years ago MT members from Ukraine reported such a situation, related to sharing pirated software on USB drives. In many countries, software is hardly available from legal sources and there are no anti-piracy regulations.
 

bazang

Level 6
Jul 3, 2024
298
Some readers may think that I am prejudiced against Leo's tests. So let's consider three important facts:
  1. The infection rate of tested AVs is incredibly low - typical for samples several days old.
Most testers are trying to prove a specific point. Their objective is not to say "It is unlikely you will be targeted or infected, even through your own actions." Why? Because they want to prove their specific point, despite the reality that the probability of system infection is low.

2. Only EXE files are tested.
Leo has talked about this before, stating that the average user only downloads or launches .exe or .msi file types. He knows all the various malware types and file types. He is not interested in demonstrating that. He just collects .exe files for his demonstrations. He also knows that if a security software publisher's behavioral protections do not properly handle .exe files, then those protections are very likely to fail at containing malicious scripts. That is not a given, but I understand Leo's logic. In most cases he is correct, but his testing is just like many others' - it is very specific as opposed to being comprehensive.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,510
I expect that @cruelsister shares Leo's general criticisms of Microsoft Defender.
One can share @cruelsister's criticism based on her video; this is a good example that independent testers can show something interesting.

One can criticize Defender based on professional tests which show that Defender free (on default settings) is not among the top AVs.

One can criticize Defender based on my POC, for example.

Leo's general criticism would be a mistake. Based on his videos, he has the right to criticize only the completeness of Defender's signatures.
 

Game Of Thrones

Level 6
Verified
Well-known
Jun 5, 2014
275
In India and SE Asia, infections from USB flash drive sharing are rampant.

Not all infections or attacks happen via a file download from a network and execution. That is a real-world fact.

How malware is delivered is a moot point in testing unless the objective is to show how a solution protects at the delivery stage. Default deny solutions predominantly act to block execution or interrupt the malicious run sequence within the post-download or post-exploit environment.

Yeah, we have a lot of infected USB devices here too. We tested, and Bitdefender, ESET, and Kaspersky did not care where the file came from; they used their cloud and full protection layers on every unknown and new file, no matter the entry point. Bitdefender uses its cloud on execution, but ESET and Kaspersky use it on access.

Windows Defender is somewhat weak at this entry point; at default settings, USB devices can infect a system. I've seen some on people's laptops and PCs in shops and copy stores and stuff, and the attack chain gets interrupted when that flash drive arrives on a system with ESET, Kaspersky, or Bitdefender installed.

The beast is Bitdefender. Back in the day (many years ago) there was a virus that kept files hidden on USB drives, and in those days Kaspersky, ESET, and Bitdefender detected it; they were the first. All the security solutions just detected the virus and deleted it, but Bitdefender was a surprise: it not only detected the virus but repaired the USB drive, unhiding the files and restoring a normal USB drive. If you look at tests, Bitdefender and Kaspersky are the two best at repairing an infection.

Windows Defender? It needs to catch the point of entry to do a better repair, or it will just react with simple mechanisms (detect and delete).
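For context, the "repair" described above largely amounts to clearing the Hidden and System attributes that the classic USB worms set on the victim's files. A minimal sketch of that manual cleanup on Windows (Python via ctypes; the drive letter is a hypothetical example, and this assumes the worm itself has already been removed):

```python
import ctypes
import os

FILE_ATTRIBUTE_HIDDEN = 0x02
FILE_ATTRIBUTE_SYSTEM = 0x04
INVALID_FILE_ATTRIBUTES = -1

def unhide_tree(root):
    """Clear the Hidden/System attributes that classic USB worms set."""
    kernel32 = ctypes.windll.kernel32
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            attrs = kernel32.GetFileAttributesW(path)
            if attrs == INVALID_FILE_ATTRIBUTES:
                continue  # unreadable entry; skip it
            kernel32.SetFileAttributesW(
                path, attrs & ~(FILE_ATTRIBUTE_HIDDEN | FILE_ATTRIBUTE_SYSTEM))

unhide_tree("E:\\")  # hypothetical drive letter of the disinfected USB stick
```

Products like Bitdefender automate this alongside detection and removal; the sketch only illustrates why "repairing" a drive is a separate step from deleting the malware.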
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,510
Windows Defender is somewhat weak at this entry point; at default settings, USB devices can infect a system. I've seen some on people's laptops and PCs in shops and copy stores and stuff...

Yes, that issue can be important when people often share/execute files without an Internet connection. This is still the case in many countries, and Defender + Windows Firewall cannot be good protection there. It is a kind of paradox that Defender free + Windows Firewall works best in rich countries, where most people use legal software and could easily afford to buy a top AV.
 

mlnevese

Level 28
Verified
Top Poster
Well-known
May 3, 2015
1,741
Yes, that issue can be important when people often share/execute files without an Internet connection. This is still the case in many countries, and Defender + Windows Firewall cannot be good protection there. It is a kind of paradox that Defender free + Windows Firewall works best in rich countries, where most people use legal software and could easily afford to buy a top AV.
That's exactly why I say the best antivirus depends on the situation. I have seen occasions in my work where a person had to get files from clients - often PDF files, which can be infected - using an offline laptop: going to the client's place and copying from a USB drive, for instance. Any security software on that machine would be updated sometime in the morning, then not again until the next day, when he returned with the files.

In this case, any solution that strongly depends on an Internet connection is useless. He'd be much better off with ESET, Avast, or Bitdefender, which have strong local signatures.

Mostly I see people in this and other forums assuming an Internet connection will be available everywhere, but that's not the case in many places.

Also, most seem to dismiss other infection vectors like USB drives, only considering infections from the net, which in many places will not be the main source of infection.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,510
In this case, any solution that strongly depends on an Internet connection is useless. He'd be much better off with ESET, Avast, or Bitdefender, which have strong local signatures.
That is true.
Of course, using Defender would still be OK after enabling the ASR rules (which work mainly independent of an Internet connection).
However, ASR rules will block pirated software (not a solution for everyone).
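For readers who want to try this, here is a sketch of enabling one such rule - "Block untrusted and unsigned processes that run from USB," which matches the USB scenarios discussed above - via Defender's documented Add-MpPreference PowerShell cmdlet, wrapped in Python only for illustration (requires an elevated prompt and Defender as the active AV):

```python
import subprocess

# GUID of the documented ASR rule "Block untrusted and unsigned
# processes that run from USB".
USB_RULE_ID = "b2b3f03d-6a65-4f7b-a9c7-1c7ef74a9ba4"

# Add-MpPreference is Defender's cmdlet for managing ASR rules.
# Use "AuditMode" instead of "Enabled" to log matches without blocking.
subprocess.run(
    [
        "powershell.exe", "-NoProfile", "-Command",
        f"Add-MpPreference -AttackSurfaceReductionRules_Ids {USB_RULE_ID} "
        "-AttackSurfaceReductionRules_Actions Enabled",
    ],
    check=True,  # raise if the cmdlet fails (e.g., not elevated)
)
```

Get-MpPreference lists the currently configured rule IDs and actions, which is an easy way to verify the change took effect.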
 
