App Review Windows Defender Firewall Critique Part 2

It is advised to take all reviews with a grain of salt. In extreme cases some reviews use dramatization for entertainment purposes.
Content created by
Ophelia

simmerskool

Level 38
Verified
Top Poster
Well-known
Apr 16, 2017
2,783
Slightly off topic: "untrusted files are simply blocked." I'm running AppGuard 6.7 again, this time successfully -- I just needed to get comfortable with it. As Shadowra mentioned in a recent video test, it blocks everything (paraphrase), but whatever should be running is running -- FWIW, my current experience.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,592
No, not at all. FirewallHardening will indeed add a bunch of rules for various things (probably one of the best is a block on PowerShell outbound requests). However, as in the video, if Windows Firewall is disabled FIRST, it does not matter what rules are in place (oh, and WFH does not include a rule for this malware).

Yes and No.
The test was done without the first infection stage (malware delivery). So there are two possible scenarios:
Yes: FirewallHardening cannot help when the initial malware is downloaded by the user and the attack does not abuse the outbound connections of LOLBins.
No: FirewallHardening can prevent the malware (from the video) if it is delivered as a payload via outbound connections made by LOLBins (a sketch of such a rule is below).
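For readers who have not used FirewallHardening, the kind of rule it creates looks roughly like the sketch below. This is my own illustration using the built-in NetSecurity cmdlets, not FirewallHardening's actual code, and the rule name is made up:

```powershell
# Sketch only: block outbound connections for a commonly abused LOLBin
# (here powershell.exe). FirewallHardening adds comparable rules for other LOLBins.
New-NetFirewallRule -DisplayName "Block PowerShell outbound (example)" `
    -Direction Outbound `
    -Program "$env:SystemRoot\System32\WindowsPowerShell\v1.0\powershell.exe" `
    -Action Block -Profile Any
```

Such rules are enforced by Windows Firewall itself, which is why they stop mattering the moment the firewall is disabled, exactly as the quote above points out.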
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,592
Slightly off topic: "untrusted files are simply blocked." I'm running AppGuard 6.7 again, this time successfully -- I just needed to get comfortable with it. As Shadowra mentioned in a recent video test, it blocks everything (paraphrase), but whatever should be running is running -- FWIW, my current experience.
AppGuard can block the particular malware from the video. However, it can be bypassed on the default settings when using shortcuts and scripting methods. You must restrict popular LOLBins for more protection. (y)
 
  • Thanks
Reactions: simmerskool

simmerskool

Level 38
Verified
Top Poster
Well-known
Apr 16, 2017
2,783
AppGuard can block the particular malware from the video. However, it can be bypassed on the default settings when using shortcuts and scripting methods. You must restrict popular LOLBins for more protection. (y)
thanks! & you have a very good app for that. :D
 
  • Like
Reactions: Dave Russo

bazang

Level 8
Jul 3, 2024
364
A common failing of tests like this.
In India and SE Asia, infection via shared USB flash drives is rampant.

Not all infections or attacks start with a file being downloaded from a network and then executed. That is a real-world fact.

How malware is delivered is a moot point in testing unless the objective is to show how a solution protects at the delivery stage. Default deny solutions predominantly act to block execution or interrupt the malicious run sequence within the post-download or post-exploit environment.
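To make that last sentence concrete, here is a minimal sketch of the default-deny idea using Windows' built-in AppLocker cmdlets. This is my own illustration, not how AppGuard or any other default-deny product is implemented, and a real deployment needs far more care (default rules for the Windows directory, audit mode first, and so on):

```powershell
# Crude default-deny sketch with AppLocker (illustration only; requires the
# AppLocker module and a Windows edition that supports AppLocker enforcement).
# 1. Build allow rules from executables already installed under Program Files.
Get-ChildItem -Path $env:ProgramFiles -Recurse -Filter *.exe -ErrorAction SilentlyContinue |
    Get-AppLockerFileInformation |
    New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -Optimize -Xml |
    Out-File "$env:TEMP\allow-existing.xml"

# 2. Apply the policy. With executable-rule enforcement enabled, anything not
#    covered by an allow rule is blocked at launch, no matter how it arrived
#    (download, USB stick, email attachment, etc.).
Set-AppLockerPolicy -XmlPolicy "$env:TEMP\allow-existing.xml"
```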
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,592
Testing malware without the delivery part has some pros and cons. If one wants to show a weak point of the tested security layer (like @cruelsister), skipping malware delivery makes the test simpler and easier to understand. I did a similar thing in my tests on dismantling AVs.
The cons follow from the fact that it is hard to conclude the real-world danger because other security layers can partially cover the exposed weak points. For example, the particular malware made by @cruelsister would be blocked on download by SmartScreen in Edge, or on execution by Windows SmartScreen - if the file was downloaded directly from a malicious URL.
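One mechanism behind that example is worth spelling out: SmartScreen only evaluates files that carry the Mark of the Web, a small Zone.Identifier alternate data stream that browsers attach to downloads on NTFS volumes. A file copied from a USB stick or dropped by another process typically has no such stream, so SmartScreen never gets involved. You can check for it yourself; the path below is just a hypothetical example:

```powershell
# Inspect the Mark of the Web on a downloaded file (NTFS only).
$file = "$env:USERPROFILE\Downloads\sample.exe"   # hypothetical path
Get-Content -Path $file -Stream Zone.Identifier -ErrorAction SilentlyContinue
# Typical output for a browser download:
#   [ZoneTransfer]
#   ZoneId=3
```

This is why the delivery vector matters for which layers even get a chance to react.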
 

bazang

Level 8
Jul 3, 2024
364
Testing malware without the delivery part has some pros and cons.
The entire point of not testing the malware delivery part is that it is not relevant to the test.

Not testing the delivery part shows what would happen if detection or blocking at the delivery stage failed.

This is an extremely simple concept, but there are those who argue that "any test that does not show the delivery part is an invalid test." That statement is not accurate, and the people repeating it over and over have an agenda. That agenda is to discredit @cruelsister 's tests.

The cons follow from the fact that it is hard to conclude the real-world danger because other security layers can partially cover the exposed weak points.
That is a weak argument. As I have stated many times, the spreading of malware via shared USB flash drives happens at a large scale in South Central and Southeast Asia. The only way to test such a scenario is to launch the malware either from the USB drive or, what is most typical, from the desktop. Hundreds of millions of people use PCs in that region of the world, but they do not have reliable internet. They solve this partially by sharing USB drives.

The whole argument "A test must also include the malware delivery (meaning internet download) to be valid" is a very self-centered, first-world perspective.

It is a completely false statement to say "Your test is invalid because it did not test every layer of the product." OK, so what about products that only have a single layer of protection? What about the case where Smartscreen fails to block? What then? Only certain people here at MT will say "Well, that is not real-world because the tester turned off Windows Smartscreen and other Windows Security protections. If they did not disable them, then the test would have failed." LOL, such statements are ridiculous and reveal a lack of basic understanding of test methodology. But what is really going on is certain people here take every opportunity to attack some aspect of any test demonstration that @cruelsister makes because their objective is to discredit the test, and thereby discredit @cruelsister herself.

Nobody here had better ever go to a Black Hat conference. They will see proof-of-concept (PoC) exploits, vulnerability attacks, and testing after which they will want to wash their eyes out with Clorox. A significant number of demonstrations at hacker and pentest conferences involve disabling aspects of the operating system, or, more often, slightly outdated builds of the OS or software that are then exploited.

"What if" or "What could potentially occur..." testing if this or that fails (by disabling it) to protect is a standard, widely-accepted industry pentest practice. Security layers are not infallible. They can be bypassed. So honest and accurate testing of a focused aspect of a system can be done by disabling a security feature, a security layer, or devising a test that does not utilize that feature or layer. It is a completely legit form of testing.

These are very simple concepts. Children on a schoolyard playground can understand them.

When tests are performed and demonstrated, it is not the responsibility of the person(s) performing the tests to explain all the caveats of the test. Any claim otherwise just ain't true. The responsibility is on the viewer to figure it out. If they do not have that knowledge, then it is on them to gain the knowledge to completely understand what the test shows and what it does not show, what the events shown mean or imply, and what they do not.

It is not @cruelsister 's responsibility to educate every viewer on the full details of her demonstrations. It is up to the viewer to figure out the limitations, the exceptions, the corner case & specificity of the test.

It is for this reason that neophytes are like deer caught in the headlights at a Black Hat conference. The difference is that they are there to learn, and many soon get it, whereas the intent at MT is to criticize tests in order to discredit them and the person who created and performed them.
 
Last edited:

cruelsister

Level 43
Thread author
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Apr 13, 2013
3,224
A nice comparison for business AVs can be found in the MRG Effitas "360° Assessment & Certification" tests, in the sections "Real Botnet" and "Banking Simulator" (Eset and Malwarebytes on top).
Glad that you referenced MRG (although the same is essentially true for the other Majors).

If you view their testing methodology, you will see that a given malicious sample is tested by downloading the file via Chrome and then running it from the DOWNLOAD DIRECTORY. Having Chrome save the downloaded file to that directory is an arbitrary choice, as it could just as easily have been saved anywhere else (like the Desktop directory).

Also, they are testing specific products without either the help of or any mention of SmartScreen; the test would be flawed if they did. Also not mentioned at all is the use of a blocklist, the presence of which is barely an inconvenience for a malicious coder, as something (like Confluence Networks in the video) can act as a proxy for a Virginal website containing the malicious file.

Finally, malware can arise from anywhere (direct download, infected USB, worms over the network, torrented files, email attachments, etc.). The methodology in my videos parallels that used by the professional testing sites and most (if not all) non-pros. A true "Real World" test is acquiring a malicious file from ANYWHERE, plopping it on a system in ANY directory, and testing it against a product utilizing only the defense mechanisms installed by that product.

Oh, and "in the sections "Real Botnet" and "Banking Simulator" (Eset and Malwarebytes on top)"- ESET did not get Top marks in my recent test of it. but I suppose some tests are Crueler then others.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,592
@bazang,

I did not write that:
  • all tests must include a delivery part,
  • all malware must be web-originated,
  • the test is invalid because the delivery part is missing,
  • the tester must educate others,
  • etc.
If you did not notice, I made similar videos without a delivery part.

These are very simple concepts. Children on a schoolyard playground can understand them.
I would not use such words. They will not help build your authority.

I am confused by your post. You answered questions in a way that could suggest they were asked by me (but they were not). Did you watch my videos? For example:
 
Last edited:

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,592

@cruelsister,


I agree with everything you wrote except that your video is a real-world test. :)
This does not mean that the test is invalid as a video demonstration. The idea of your tests is similar to several of my tests (not by accident, because I watched several of your videos). I would not publish my tests if I thought them invalid or useless. Of course, my tests were also non-real-world.
 
Last edited:

oldschool

Level 85
Verified
Top Poster
Well-known
Mar 29, 2018
7,698
The cons follow from the fact that it is hard to conclude the real-world danger because other security layers can partially cover the exposed weak points.
Precisely. All tests have to be taken with a grain of salt.
The whole argument "A test must also include the malware delivery (meaning internet download) to be valid" is a very self-centered, first-world perspective.
Who made this claim?
 

cruelsister

Level 43
Thread author
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Apr 13, 2013
3,224
One thing that I guess should be mentioned is exactly what the focus of a given test is. For the Professional tests it is against whatever (verified) malware shows up in their Honeypots (or whatever) during a specific time frame (a month, a quarter, a year), and unless the test is more specific (using only ransomware or only data Stealers), it is done without regard to the mechanism by which the malware works.

My tests differ in that they are, for the most part, testing specific mechanisms without regard to age (the samples may be very old or very new, freshly coded, and these may or may not be included in any time-constrained Honeypot); this is done to determine whether the AM app being tested will protect against them.

But as with the Pro tests, these tests utilize the "Lowest Common Denominator" method: get the malware from somewhere, place the file somewhere on the system, and run the malicious file on that system, which has no other protection than the AM app itself.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,592
For the Professional tests it is against whatever (verified) malware shows up in their Honeypots (or whatever) during a specific time frame (a month, a quarter, a year), ...

It is not so simple. The samples should be representative too. So, the tester must avoid morphed samples, POCs, etc. This is probably the most challenging part of real-world testing. It can also be controversial because most AV testing labs do not share details of how the samples are chosen. One has to believe that the AV vendors and leading AV testing labs can cooperate to make those tests reliable. I noticed that the real-world scorings of AV-Test, AV-Comparatives, and SE Labs can differ significantly over a year. So, their testing methodologies are not perfect.

Edit.
It is possible that a reliable AV testing methodology does not exist and that the test scorings are about as reliable as a month-ahead weather forecast. :)
 
Last edited:

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,592
I should probably explain why I cannot treat the video from this thread as a real-world test. Simply, the leading AV testing labs use this term for "0-day malware attacks, inclusive of web and e-mail threats".

It is interesting how AV testing labs choose the representative samples. For example, AV-Test registers over 450,000 new malware samples per day, and only a small fraction of the registered samples are used in the tests.
https://www.av-test.org/en/statistics/malware/

Here is a fragment of AV-Comparatives testing methodology related to finding new threats for real-world tests:
We use our own crawling system to search continuously for malicious sites and extract malicious URLs (including spammed malicious links). We also search manually for malicious URLs. In the rare event that our in-house methods do not find enough valid malicious URLs on one day, we have contracted some external researchers to provide additional malicious URLs (initially for the exclusive use of AV-Comparatives) and look for additional (re)sources.
Another fragment related to the statistical analysis of results:
In this kind of testing, it is very important to use enough test cases. If an insufficient number of samples is used in comparative tests, differences in results may not indicate actual differences in protective capabilities among the tested products. Our tests use much more test cases (samples) per product and month than any similar test performed by other testing labs. Because of the higher statistical significance this achieves, we consider all the products in each results cluster to be equally effective, assuming that they have a false-positives rate below the industry average.
https://www.av-comparatives.org/real-world-protection-test-methodology/

For example, in the latest test, 13 AVs belong to the same cluster, so they must be treated as equally effective, even though some of them detected all tested samples and others missed 4 samples. The statistical model AV-Comparatives uses says that the differences in missed samples are not necessarily real; such differences can appear, with high probability, as artifacts of the testing methodology.

https://www.av-comparatives.org/tests/real-world-protection-test-february-may-2024/
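As a rough back-of-the-envelope illustration of why 0 misses and 4 misses can land in the same cluster (my own sketch, assuming about 500 test cases; the exact count and AV-Comparatives' actual clustering model are in the linked report):

```powershell
# Sampling-error intuition only; NOT AV-Comparatives' real statistical model.
$n      = 500                                # assumed number of test cases
$misses = 4
$p      = $misses / $n                       # observed miss rate = 0.8 %
$se     = [math]::Sqrt($p * (1 - $p) / $n)   # standard error of that rate
$low    = [math]::Max(0.0, $p - 1.96 * $se)  # 95 % interval, normal approximation
$high   = $p + 1.96 * $se
"{0:P2} to {1:P2}" -f $low, $high            # roughly 0.02 % to 1.58 %
```

The interval for the 4-miss product almost touches zero, so a perfect score and 4 misses can easily be the same underlying protection level seen through sampling noise.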

Please forgive me if this post is off-topic, but most readers usually do not realize such important details.
 
Last edited:

cruelsister

Level 43
Thread author
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Apr 13, 2013
3,224
The cons follow from the fact that it is hard to conclude the real-world danger because other security layers can partially cover the exposed weak points. For example, the particular malware made by @cruelsister would be blocked on download by SmartScreen in Edge, or on execution by Windows SmartScreen - if the file was downloaded directly from a malicious URL.
My recent videos are about testing a specific product against a specific malicious mechanism of action. As for it being hard to conclude the Real Worldliness of such a test, a better example would be my last video on Worms:

In that one, the malware samples that were ignored by ZoneAlarm were neither new in Age nor in Mechanism (and are definitely in the wild), and for all I know may actually have been included in the 1000's of samples used in the pro tests.

Also, whether or not something like SmartScreen would have detected them if they were downloaded is inconsequential, as when testing a specific product's defenses one must NOT include any additional type of malware defense that is extraneous to the product being tested. In other words, if a SmartScreen popup did occur, the proper procedure would be to override the warning and run the file anyway (as we are testing a specific product and NOT a specific product with an assist from Microsoft).

Finally, it had been written by another (not you) on MT that running a file from the Desktop folder is somehow invalid. This is obviously silly, as a file must be run from somewhere: from the Download folder (in the case of the file being downloaded), from within whatever folder the email client uses for attachments, or from whatever folder torrented files are stored in, etc.

So although such a video may not be worth the time to watch (and, as I normally get only about 100 views, this seems to be the case), stating that it is not in any way Real World because it is not as inclusive as a pro test has questionable validity.
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,592
So although such a video may not be worth the time to watch (and, as I normally get only about 100 views, this seems to be the case), stating that it is not in any way Real World because it is not as inclusive as a pro test has questionable validity.
No offense. My objection is related only to the terminology. If you use the term "Real World test", many people will understand it as a test on "0-day malware attacks, inclusive of web and e-mail threats". It would be better to use terminology consistent with professional tests. In your case, using the "Malware Protection test" (or simply the malware test) would not cause misunderstandings. For example, AV-Comparatives use the term "Malware Protection test" when using in-the-wild samples executed on the system (like you do):
In the Malware Protection Test, malicious files are executed on the system. While in the Real-World Protection Test the vector is the web, in the Malware Protection Test the vectors can be e.g. network drives, USB or cover scenarios where the malware is already on the disk.
https://www.av-comparatives.org/tests/malware-protection-test-march-2024/
 

bazang

Level 8
Jul 3, 2024
364
@bazang,

I did not write that:
  • all tests must include a delivery part,
  • all malware must be web-originated,
  • the test is invalid because the delivery part is missing,
  • the tester must educate others,
  • etc.
Apologies. I did not mean you stated it. I was referring to others. Sorry for the confusion. I am fully aware of all your tests. The methodology you use. The specifics of the tests themselves.

Finally, it had been written by another (not you) on MT that running a file from the Desktop folder is somehow invalid. This is obviously silly, as a file must be run from somewhere: from the Download folder (in the case of the file being downloaded), from within whatever folder the email client uses for attachments, or from whatever folder torrented files are stored in, etc.
I can take you to the U.S. government lab where a completely air-gapped LAN is attacked at a specific point, and the lateral and vertical hoobahjoob results in all systems being infected, with malware running in randomly chosen file directories or executed right in memory.

The hoobahjoob is a real thing, without any further specification.
 

bazang

Level 8
Jul 3, 2024
364
Who made this claim?
I do not recall the user names, but it has been stated quite a few times here at MT. Perhaps @cruelsister can provide a pointer to the post or posts?

So, the tester must avoid morphed samples, POCs, etc.
90+ % of all malware is morphed. It is all morphed. The dinguses create a DevOps pipeline and bot that craps out morphed variants at the rate of X samples per hour. Those samples get uploaded to a platform that sends them along to various other platforms with different means to disperse. It ensures massive malware dispersal to the four corners of the Earth.

If any AV or security solution cannot effectively cope with morphed malware at least 95% of the time (or better), then the user should find a different solution.
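One reason morphed variants cause so much trouble for plain hash or signature lookups: even a trivial change produces a completely different file hash. A harmless illustration (my own sketch, using a throwaway test file, obviously not malware):

```powershell
# Appending a few junk bytes yields an entirely different SHA-256,
# so every morphed variant looks "new" to an exact-hash blocklist.
$file = "$env:TEMP\morph-demo.bin"                  # throwaway test file
Set-Content -Path $file -Value "harmless test data"
(Get-FileHash -Path $file -Algorithm SHA256).Hash   # hash of the original
Add-Content -Path $file -Value ([guid]::NewGuid())  # "morph": append junk
(Get-FileHash -Path $file -Algorithm SHA256).Hash   # completely different hash
```

Behavioural, heuristic, and cloud reputation layers exist largely because exact-match lookups alone cannot keep up with that rate of variation.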
 
