AVLab.pl Modern protection without signatures – comparison test on real threats (Advanced In The Wild Malware Test)

Disclaimer
  1. This test shows how an antivirus behaves with certain threats, in a specific environment and under certain conditions.
We encourage you to compare these results with others and make informed decisions about which security products to use.
    Before buying an antivirus you should consider factors such as price, ease of use, compatibility, and support. Installing a free trial version allows an antivirus to be tested in everyday use before purchase.

Andy Ful

From Hard_Configurator Tools
How many unknown samples are included in AVLab tests?

The question should be asked because most AVs usually miss 0 samples in AVLab tests.

It seems that a methodology similar to AVLab's is used in the AV-Comparatives Malware Protection tests: the samples are executed from disk and the AVs are on default settings.

Let's compare the results of Avast, Avira, Defender, and Malwarebytes in AV-Comparatives tests.

AV-Comparatives Malware Protection 2021 few-weeks-old prevalent samples (about 20 000 total samples)
Avast+Avira ............. 4+4 missed samples
Defender ....................18 missed samples
Malwarebytes ........... 22 missed samples

We can also gather the results of Avast and Avira for a similar number of samples in AVLab tests:
AVLab (about 18 000 samples in 13 tests, January 2020 - January 2022)
Avast+Avira ............... 0+1

Unfortunately, only Avast and Avira are commonly tested both by AVLab and AV-Comparatives.
The number of missed samples strongly suggests that in the AVLab tests unknown samples are very rare for Avira and Avast (even rarer than among few-weeks-old prevalent samples). The Avast and Avira results are typical of many AVs tested by AVLab.

From the results of AV-Comparatives tests, it follows that the difference in protection between Defender and Malwarebytes is very small: about 4 samples per 20 000.
If we decrease the number of unknown samples, we can expect this difference in protection to remain similarly small. With 0 unknown samples, both AVs would miss 0 samples.

Now, let's look at the current AVLab results (missed samples):
Malwarebytes ......... 1 per 1834 samples, or proportionally ~ 11 per 20 000 samples
Defender ............. 397 per 1834 samples, or proportionally ~ 4330 per 20 000 samples
The difference in protection: (4330 - 11) per 20 000 samples
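The scaling above is a simple proportional projection onto a common sample base; a minimal sketch in Python, for illustration only:

```python
# Minimal sketch: project each AV's miss count onto a common base of 20,000 samples.
def per_20k(missed: int, total: int) -> float:
    return missed / total * 20_000

avlab_misses = {"Malwarebytes": 1, "Defender": 397}  # misses per 1,834 samples
for av, missed in avlab_misses.items():
    print(f"{av}: {missed} per 1834 -> ~{per_20k(missed, 1834):.0f} per 20 000")
# Malwarebytes: 1 per 1834 -> ~11 per 20 000
# Defender: 397 per 1834 -> ~4330 per 20 000
```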

The number of samples missed by Defender is horribly high compared to Malwarebytes, in clear contradiction to the AV-Comparatives results.
Here the difference in protection is over 4000 samples, while for AV-Comparatives it was only 4 samples.

Hmmm!!!
We can call this result truly anomalous!!! It is obvious that in the AVLab test many samples are unknown to Defender, even if they are known to Avast and Avira.

Summary.
In the AVLab tests, we have three groups of AVs (at least).

  • Group 1 (most of the tested AVs) - almost all samples are known to these AVs,
  • Group 2 (Defender, Trend Micro) - some "dead" samples are unknown to these AVs,
  • Group 3 (Comodo, Emsisoft,...) - these AVs use file reputation, HIPS, etc., so they do not care much about "dead" samples.
In such a situation it is very possible that the missed samples are unknown to Defender (or Trend Micro) because they never hit customers, and these vendors are simply slow to add such ("dead") samples to the cloud signature database or to behavior-based detections in the cloud.

I added Trend Micro to Group 2 because it got a similar result as Defender when tested by AVLab in May 2020.
https://avlab.pl/en/results-may-2020/

One could suspect that the AVLab tests are flawed, but quite similar behavior is also visible for Trend Micro in the AV-Comparatives tests:

Consumer Tests
AV-Comparatives Malware Protection 2021

........................... March ..... Sep
Avast ................... 1 .......... 3
Avira ................... 2 .......... 2
Bitdefender ............. 0 .......... 2
Eset ................... 10 ......... 18
Kaspersky ............... 4 ......... 10
Malwarebytes ............ 6 ......... 16
McAfee .................. 0 .......... 0
Microsoft .............. 15 .......... 3
Norton .................. 0 .......... 0
Panda ................... 2 .......... 7
TrendMicro ............ 103 ........ 121
Vipre ................... 0 .......... 2
Samples ............. 10013 ...... 10029

Over 200 missed samples per 20 000 total samples is an anomalous result, but still many times lower than Defender's in the AVLab test (over 4000 missed samples per 20 000 total samples). Furthermore, Defender missed only about 20 samples per 20 000 total samples in the AV-Comparatives tests. :unsure:

One of the MT members (@McMcbrad) asked the Trend Micro staff about this behavior. The answer was that those missed samples were not important for the security of TrendMicro customers. That is why I call such samples "dead".
TrendMicro probably does not bother to quickly add signatures for relatively old "dead" samples, while Defender can add most of them within one or two weeks. Avast, Avira, and probably many other AVs can manage such samples in the cloud very quickly (via fast signatures or behavior-based detections). So even when customers were infected in the wild, the Malware Protection test cannot reflect this in the results.

Why can there be many "dead" samples?
From Microsoft's reports it follows that most malware hits only once (over 90% of infections). After one shot it becomes "dead" and is not used in attacks anymore. The AVs are very quick to add fast signatures or behavior-based detections in the cloud; in most cases a fast signature can be created within several minutes of a successful infection. Most often, the attackers use a new polymorphic version of the malware, which gives far better chances of infecting another computer. If an AV is slow in managing such "dead" samples, many of them can be counted as misses in the Malware Protection tests.

Why the "dead" samples are not so important in the Real-World tests?
Such tests use 0-day or 1-day malware, so the AV vendors have no time to manage the "dead" samples.
The problem of "dead" samples is most important for a few days malware and in the less degree for a few weeks malware. Such malware samples are used in Malware Protection tests.

Can the "dead" samples be reused?
I am not sure. If so then probably in cracks and pirated software. But, the chances of infections via "dead" samples are very low.

Does it mean that Defender on default settings is OK in businesses?
No. Defender on these settings is not good at fighting lateral movement. This can be partially seen in the AVLab tests. The "dead" but unknown samples can mimic the fresh unknown samples used in highly targeted attacks. These malware samples can be executed at Level 3 as payloads without the MOTW, bypassing Defender's BAFS and SmartScreen. Other AVs stand a better chance because they have additional features like Network Protection, file reputation lookup, HIPS, etc.
AVLab seems to mimic the business scenario only for Defender, when there are many "dead" samples and the BAFS feature is disabled. A similar situation may also apply to Comodo and Emsisoft (Group 3), but these AVs can block the unknown "dead" samples via file reputation lookup.
The perfect results of several other AVs probably reflect only the fact that within one or two days they can complete their cloud detections by borrowing "dead" samples from other AVs or by hunting the payloads. So, in a test with few-days-old samples, almost all malware can be detected/blocked by the cloud backend (fast signatures + behavior-based detections).

Post edited.
Added Group 3 to underline the distinctive features based on file reputation, HIPS, etc.

Edit 1.
I examined the other AVLab tests since the year 2020. Defender and Trend Micro missed several samples in some tests, and Defender blocked all samples in some others. So, the "dead" samples can probably be a reason for Defender and Trend Micro missing several samples in those tests, but not for the few hundred Defender misses in the current test. :(

Edit 2.
My reasoning about unknown samples was based on the assumption that in the AVLab tests the samples were executed without MOTW. This followed directly from the Defender results (BAFS did not work in the current test). Anyway, from the testing methodology it follows that the files must have MOTW, so the BAFS issue was caused by something else. In this case, we must exclude Avast from these considerations, because for EXE files with MOTW it uses CyberCapture (detonation in the sandbox). This feature makes a big difference when comparing AVLab with AV-Comparatives. Fortunately, this does not change the main reasoning: the 4 samples missed by Avira in the AV-Comparatives tests are still slightly more than the 1 sample it missed in the AVLab tests. So, the number of unknown samples in the AVLab tests is comparable to (probably smaller than) that in the AV-Comparatives tests.
See also:
https://malwaretips.com/threads/mod...d-in-the-wild-malware-test.112630/post-977993

Andy Ful

From Hard_Configurator Tools
@Andy Ful

I like how you take the calm, cool, and reflective approach and ask the hard-hitting questions that should be asked, rather than just blindly accepting the results as an unequivocal declaration (y)

Clearly you do your homework.

AVLab testing is special. It shows things about AVs that cannot be seen in other tests, but this also makes the AVLab tests hard to interpret correctly.
For example, apart from AVLab, only the Malware Hub could mimic the behavior of Defender (and probably other AVs) against lateral movement. Other tests are based on a few custom samples, so they are not convincing. The categorization into Levels 1, 2, and 3 in AVLab is interesting and valuable. AVLab also shows that the chance of infection by few-days-old malware is very low.
Another innovative approach is visible in MRG Effitas, which uses categories such as "Behaviour block", "Auto block", and "Blocked in 24 hours". The category "Blocked in 24 hours" can mimic the post-infection features of the AVs. Of course, there is no perfect testing methodology, and the results are usually misunderstood by readers. The common mistakes are comparing the results of Real-World tests with Malware Protection tests, or ignoring the random factor in the test results.

Edit.
According to the MRG Effitas tests (from the year 2021), the AV that most frequently uses behavior blocks is Avira. The "Behaviour block" is similar to Level 3:
The test case is marked as “Behaviour Blocked” if the security application blocks the malicious binary when it is executed and either automatically blocks it or postpones its execution and warns the user that the file is malicious and awaiting user input.

It is most visible in the latest test:

Security Applications Tested:
  • Avast Business Antivirus 21.11.2683
  • Avira Antivirus Pro 1.1.51.20724
  • Bitdefender Endpoint Security 7.4.2.130
  • ESET Endpoint Security 8.1.2037.2
  • F-Secure Computer Protection Premium 21.10
  • Malwarebytes Endpoint Protection 1.2.0.954
  • Microsoft Windows Defender 4.18.2111.5
  • Symantec Endpoint Protection 14.3.5413.3000
  • Trend Micro Security 6.7.1560/14.2.1310

Andy Ful

From Hard_Configurator Tools
The best results in Malware Protection tests, or in any test that includes only PE files (EXE, DLL, etc.), will be obtained by the AVs that use file reputation lookup. This is rare when AVs are tested on default settings (Comodo, Norton, Emsisoft).
But several AVs can use it after some tweaks (Avast, AVG, Defender, Kaspersky Internet Security).
Avast and AVG can use Hardened Mode. KIS can be configured to highly restrict files unknown to KSN.
In Defender, this most often works via one of the ASR rules. But the strongest solution would be the Intelligent Security Graph (ISG) option in Microsoft Defender Application Control (MDAC), which can also be applied on Windows 10 Home. The disadvantages of this solution are that Windows Pro/Enterprise is needed to create/manage the policy file, plus many false positives (software auto-updates must be disabled). Anyway, Microsoft recommends it in enterprises. The false-positives issue can be solved by submitting software installers/updaters to Microsoft; the false positives are usually removed within one day.

Edit.
It is worth knowing that file reputation lookup can be bypassed via DLL hijacking when the file reputation is narrowed to EXE files (DLLs not included). So, the strongest solutions also have to include PE libraries (Comodo, Kaspersky, Defender ISG). Norton includes DLLs only if they are downloaded from the Internet or from USB drives.

Andy Ful

From Hard_Configurator Tools
@Adrian Ścibor,

Something is wrong with Defender's BAFS feature. It obviously worked in January, March, and May 2021. But it did not work properly in September 2021, and it did not work at all in January 2022.
If BAFS did not work, it is possible that the cloud backend did not work properly either. :unsure:

Edit.
Until now, I was convinced that the samples did not have MOTW after being downloaded to disk. So, Defender's BAFS was ignored, but the cloud backend could still work. Yet, from the testing methodology it follows that the files were downloaded from the web by Google Chrome, so they had to get MOTW. Still, BAFS did not work for some reason. Because BAFS is a part of the cloud backend, it is possible that the cloud support did not work properly in the current test.
It is easy to see if BAFS works or not. When it works, then almost all EXE samples are detected/blocked at Level 1.(y)

Andy Ful

From Hard_Configurator Tools
would it have made a difference if safe search was enabled or not on Chrome?
No, if you meant SafeSearch:

According to the test methodology, the malware samples are downloaded from custom (non-malicious) URLs, so security Chrome extensions based on URL blacklisting would not help either.

Adrian Ścibor

From AVLab.pl
We can also gather the results of Avast and Avira for a similar number of samples in AVLab tests:
AVLab (about 18 000 samples in 13 tests, January 2020 - January 2022)
Avast+Avira ............... 0+1

I have different information when it comes to the failed results for Avira since 2020. For example:

January 2022: Recent Results - Advanced In The Wild Malware Test
10 samples missed by Avira.

November 2021: 33 samples missed by Avira: The November 2021 Results - Advanced In The Wild Malware Test

...
Additionally, the value to vendors is not only the results, but also the malware detection errors found. Thanks to that, almost every two months our tests help fix something in the tested products. This makes you more secure.
...

This discussion comes back from Andy every time MD is tested. In his opinion, MD should not be tested on default settings. My opinion is completely different: if a vendor is not willing or able to configure their antivirus better by default, they should not expect it from a normal user who is not technically literate. To compare several products with each other, you need to give them the same conditions. I don't want to repeat the same thing over and over again :)

Adrian Ścibor

From AVLab.pl
would it have made a difference if safe search was enabled or not on Chrome?

Hi, the SS technology is enabled in Chrome. However, we do not download malware from its original source; instead, we use our own DNS server to generate a different domain for every malware sample, to bypass/cheat IP blacklists and malware-domain lists. Why? Because that way it is harder for the AV to detect the sample, and we can in a way simulate malware downloads from a "0-day domain".
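To illustrate the idea with a toy sketch (hypothetical names only; AVLab's actual tooling is not published): each sample is served from a freshly generated hostname, so blacklists keyed on the original URL or IP never match.

```python
import hashlib
import uuid

BASE_DOMAIN = "lab-dns.example"  # assumption: a zone the test lab's own DNS server controls

def one_off_url(sample_bytes: bytes) -> str:
    """Serve a sample from a unique, never-seen-before hostname."""
    sample_id = hashlib.sha256(sample_bytes).hexdigest()[:12]
    label = uuid.uuid4().hex[:10]  # fresh label per download -> looks like a "0-day domain"
    return f"http://{label}.{BASE_DOMAIN}/{sample_id}.exe"

print(one_off_url(b"sample placeholder bytes"))
# e.g. http://1f2a9c0d3e.lab-dns.example/9f86d081884c.exe
```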

*EDIT*

I do not know how AV-C / AV-T handle the malware downloading, because it is probably hidden in their methodology. I'm not sure, please correct me if I'm wrong. I think one of these labs uses the Edge browser.

Andy Ful

From Hard_Configurator Tools
I have different information when it comes to the failed results for Avira since 2020. For example:

January 2022: Recent Results - Advanced In The Wild Malware Test
10 samples missed by Avira.

November 2021: 33 samples missed by Avira: The November 2021 Results - Advanced In The Wild Malware Test

...
Additionally, the value to vendors is not only the results, but also the malware detection errors found. Thanks to that, almost every two months our tests help fix something in the tested products. This makes you more secure.
...

This discussion comes back from Andy every time MD is tested. In his opinion, MD should not be tested on default settings. My opinion is completely different: if a vendor is not willing or able to configure their antivirus better by default, they should not expect it from a normal user who is not technically literate. To compare several products with each other, you need to give them the same conditions. I don't want to repeat the same thing over and over again :)

The results were different a few days ago, for sure (checked a few times). But, OK. Let's update my post.

Let's forget for a while about the last test and look at the test results from the years 2019-2021:

AVLab (over 17 000 samples in 16 tests, July 2019 - November 2021)
The table contains the missed samples in several tests ("x" means that AV did not participate).

MONTH: .............. Jul19 Sep19 Oct19 Nov19 Jan20 Mar20 May20 Jul20 Sep20 Nov20 Jan21 Mar21 May21 Jul21 Sep21 Nov21
Avira Pro (Prime) ....... 0 ... 12 .... 0 .... 0 .... 0 .... 0 .... 1 .... 1 .... 1 .... 0 .... 0 .... 0 .... 0 .... 0 .... 0 ... 33 = 48
Defender ................ x .... x ... 17 .... 0 .... x ... 20 .... x .... x .... 0 .... x .... 8 .... 0 .... 0 .... x .... 2 .... x = 47
TrendMicro .............. x .... x .... x .... x .... x .... 2 .. 158 .... x .... x .... x .... x .... x .... x .... x .... x .... x = 160
F-Secure .............. 103 .... x .... x .... 0 .... x .... x .... x .... x .... x .... x .... 0 .... x .... x .... x .... x .... 0 = 103
Webroot ................. x .... 0 .... x .... 0 .... 0 .... 0 .... 0 .... 1 .... 0 .... 0 .... 0 .... 0 .... 0 .... 0 .... 0 .... 3 = 4
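As a sanity check on the bookkeeping, a small sketch that recomputes the row totals ("x" = the AV did not participate):

```python
# Recompute each AV's total misses from the table rows; "x" marks a skipped test.
rows = {
    "Avira Pro (Prime)": "0 12 0 0 0 0 1 1 1 0 0 0 0 0 0 33",
    "Defender":          "x x 17 0 x 20 x x 0 x 8 0 0 x 2 x",
    "TrendMicro":        "x x x x x 2 158 x x x x x x x x x",
    "F-Secure":          "103 x x 0 x x x x x x 0 x x x x 0",
    "Webroot":           "x 0 x 0 0 0 0 1 0 0 0 0 0 0 0 3",
}
for av, cells in rows.items():
    misses = [int(c) for c in cells.split() if c != "x"]
    print(f"{av}: {sum(misses)} misses in {len(misses)} tests")
# e.g. Avira Pro (Prime): 48 misses in 16 tests ... Webroot: 4 misses in 14 tests
```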

The results for Avira and Defender look pretty much normal.
But, there are anomalous spikes in missed samples, especially for TrendMicro and F-Secure.
[Charts: missed samples per test for the AVs above, showing the spikes for TrendMicro (May 2020) and F-Secure (July 2019)]
Similar spikes can be seen for TrendMicro in the AV-Comparatives Malware Protection tests.
The results for Webroot are also anomalous compared to many other tests (including Malware Hub).
At this moment I cannot include the test from January 2022, because we have not cleared up why BAFS did not work for Defender in January 2022 but worked in some earlier tests. From the testing methodology, it follows that it should work.

So my conclusions are similar to the previous post, except that more AVs will be in Group 2:
In the AVLab tests, we have three groups of AVs (at least).
  • Group 1 (Webroot) - almost all samples are known to these AVs,
  • Group 2 (Avira, Defender, F-Secure, Trend Micro) - some "dead" samples are unknown to these AVs,
  • Group 3 (Avast, Comodo, Emsisoft,...) - these AVs use file reputation, HIPS, detonation in the Sandbox, etc., so they do not care much about "dead" samples. I put Avast here because when files are executed with MOTW, Avast detonates suspicious files in the sandbox.
In such a situation it is very possible that the missed samples are unknown to Group 2 because they never hit customers, and these vendors are simply slower to add such ("dead") samples to the cloud signature database or to behavior-based detections in the cloud.
Still, I do not think that "dead" samples could explain the 22% missed samples for any popular AV.

The serious issue with the current test is that Defender's BAFS should work, but it obviously did not work.


Generally, the AVLab tests are interesting due to the methodology that is different from other AV testing labs.
AV-Comparatives also had some issues with Defender results (they removed Defender from one report).
When I post that some results are anomalous, that does not necessarily mean that they are wrong.
My questions are not intended to bash the tests but to understand them.(y)

Edit.
I will look at the protective layers of Webroot. If it has MOTW-related and reputation features, then Group 1 would be empty and Webroot should be included in Group 3.

MacDefender

This discussion comes back from Andy every time MD is tested. In his opinion, MD should not be tested on default settings. My opinion is completely different: if a vendor is not willing or able to configure their antivirus better by default, they should not expect it from a normal user who is not technically literate. To compare several products with each other, you need to give them the same conditions. I don't want to repeat the same thing over and over again :)
I'm not singling out Andy here. When I was casually testing a few AVs' behavior blockers, I heard from many users who wanted their favorite AV's settings changed... particularly ESET, which has a dizzying number of options, and I'm honestly not convinced half of the Low/Normal/Aggressive switches do anything.

Defender is interesting especially in the consumer case where the performance is so drastically different between the default settings and the tweaked ones, but when testing, I think every product should be tested in its standard configuration for many reasons. First off, the majority of users will not be changing settings, and changing settings from their defaults requires keeping up to date on all the future changes that might conflict with your custom settings. Most users want their AV to be set it and forget it, especially for something like Defender that comes preinstalled.

Andy Ful

From Hard_Configurator Tools
Let's look at the Avira results from the period October 2019 - September 2021

MONTH: .............. Jul19 Sep19 Oct19 Nov19 Jan20 Mar20 May20 Jul20 Sep20 Nov20 Jan21 Mar21 May21 Jul21 Sep21 Nov21
Avira Pro (Prime) ....... 0 ... 12 .... 0 .... 0 .... 0 .... 0 .... 1 .... 1 .... 1 .... 0 .... 0 .... 0 .... 0 .... 0 .... 0 ... 33 = 48

We can see that even 13 tests with several thousand EXE samples cannot reliably reflect an AV's protection. :(:unsure:
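How anomalous is a 33-miss spike against such a background? A back-of-the-envelope sketch (assuming, as a simplification, equally sized tests and a constant per-sample miss rate):

```python
import math

# Avira: 48 misses over 16 tests -> on average lam = 3 misses per test.
# If misses were pure random noise, per-test counts would be roughly Poisson(lam).
lam = 48 / 16

def poisson_tail(lam: float, k: int, extra_terms: int = 120) -> float:
    """P(X >= k) for X ~ Poisson(lam), summed term by term (avoids 1 - (1 - eps))."""
    pmf = math.exp(-lam)  # P(X = 0)
    tail = 0.0
    for i in range(k + extra_terms):
        if i >= k:
            tail += pmf
        pmf *= lam / (i + 1)  # step from P(X = i) to P(X = i + 1)
    return tail

print(f"P(X >= 33) = {poisson_tail(lam, 33):.1e}")  # on the order of 1e-23
# Effectively impossible as random noise, so a 33-miss month points to a real
# cause (an unusual sample set or a product/cloud issue), not bad luck.
```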

Mjolnir

Hi, the SS technology is enabled in Chrome. However, we do not download malware from its original source; instead, we use our own DNS server to generate a different domain for every malware sample, to bypass/cheat IP blacklists and malware-domain lists. Why? Because that way it is harder for the AV to detect the sample, and we can in a way simulate malware downloads from a "0-day domain".

*EDIT*

I do not know how AV-C / AV-T handle the malware downloading, because it is probably hidden in their methodology. I'm not sure, please correct me if I'm wrong. I think one of these labs uses the Edge browser.
Does the way that you download the samples allow them to receive the mark of the web?

Andy Ful

From Hard_Configurator Tools
...
Additionally, the value to vendors is not only the results, but also the malware detection errors found. Thanks to that, almost every two months our tests help fix something in the tested products. This makes you more secure.
...

We agree here. Also, the fact that there are many AV vendors makes the life of cybercriminals harder.

This discussion comes back from Andy every time MD is tested. In his opinion, MD should not be tested on default settings. My opinion is completely different: if a vendor is not willing or able to configure their antivirus better by default, they should not expect it from a normal user who is not technically literate. To compare several products with each other, you need to give them the same conditions. I don't want to repeat the same thing over and over again

We agreed some time ago that it is acceptable for AVLab to test Defender free on default settings together with the paid business versions of other AVs. You presented the argument that in Poland there are many very small firms. In such firms there are no network administrators, so no one can use PowerShell to manage Defender's advanced settings; PowerShell is used for remote automation in larger firms. Furthermore, Defender's paid versions are overpriced.
So, in Poland, most very small businesses use 3rd-party AVs and only some insist on using Defender free on defaults.

My posts in this thread are related to the anomalous results of some AVs. Bearing in mind that such anomalies can also be seen in the AV-Comparatives tests (for about 2 years), I suspect that they can be inherently related to all Malware Protection tests. I have posted about it in several threads, most often in relation to the TrendMicro results in the AV-Comparatives tests. There is nothing wrong with anomalies, just something to investigate.(y)

Adrian Ścibor

From AVLab.pl
Defender is interesting especially in the consumer case where the performance is so drastically different between the default settings and the tweaked ones, but when testing, I think every product should be tested in its standard configuration for many reasons. ...
@MacDefender @Andy Ful

We can test MD on 1. default settings and 2. user settings, to compare both configurations. The next edition in May 2022 will be fine, so please contact me in April to discuss the configuration and help me with that.

Does the way that you download the samples allow them to receive the mark of the web?

Based on these screenshots, should it be enabled or disabled? :) There appear to be two conflicting pieces of information...

Additionally, to be clear, UAC is disabled (as you can read in our methodology) to bypass the user prompt.

So, finally, the MOTW mark is absent, because the file properties do not show the MOTW information; this seems to be necessary to bypass a user prompt.

As far as I remember, we implemented it because it gives the same result as PowerShell's Unblock-File / Remove-Item from the PS command line: running a file without a prompt:

This is the only method I know of to use native Windows processes to run malware/any untrusted file downloaded from the Internet. We use the explorer.exe command-line process to run the downloaded file, as if the user were doing it from the Windows file manager and double-clicking the file. Do you know of another method that can be used to bypass the prompt message?

Attachments

  • Zrzut ekranu 2022-03-8 o 08.03.23.png
  • Zrzut ekranu 2022-03-8 o 08.05.53.png

Andy Ful

From Hard_Configurator Tools
@MacDefender @Andy Ful

We can test MD on 1. default settings and 2. user settings, to compare both configurations. The next edition in May 2022 will be fine, so please contact me in April to discuss the configuration and help me with that.



Based on these screenshots, should it be enabled or disabled? :) There appear to be two conflicting pieces of information...

Additionally, to be clear, UAC is disabled (as you can read in our methodology) to bypass the user prompt.

So, finally, the MOTW mark is absent, because the file properties do not show the MOTW information; this seems to be necessary to bypass a user prompt.

You probably meant SmartScreen for Explorer instead of UAC. UAC ignores MOTW, but SmartScreen is triggered for EXE files with MOTW. The MOTW is embedded in the file's Alternate Data Stream (Zone.Identifier), which is added to files downloaded by a web browser (to an NTFS disk). One has to remove this ADS, or use the right-click Explorer context menu to unblock the file (which removes the ADS).
  1. Disabling SmartScreen does not remove MOTW.
  2. When the settings are like those in the screenshots from your previous posts (default Windows settings), then the MOTW is added.
In both cases, Defender's BAFS, Avast's CyberCapture, etc., can still work. Defender's behavior with working BAFS is very different from the case without BAFS. In the first case, the L3 blocks are very rare and related to Defender's post-infection protection. The initial malware must bypass all protection in the cloud, except detonation in the sandbox:
  1. The file is allowed after bypassing cloud protection, but blocked after several seconds. The advanced analysis in the cloud (file is uploaded to the cloud) took more time than the default 10 seconds.
  2. The file is allowed after bypassing cloud protection, but blocked after several seconds/minutes. The advanced analysis in the cloud detected the post-execution behavior as potentially malicious.
  3. The file is allowed after bypassing cloud protection, but blocked after several minutes. The detonation in the sandbox (can last several minutes) recognized the file as malicious.
  4. The payloads downloaded/dropped/executed via the undetected initial malware were blocked (post-infection protection).
All these cases are rare even for 0-day malware. BAFS was intended by Microsoft to block 0-day malware. For totally unknown and innovative malware the first victim will be infected, but usually, after a few minutes, all users are protected against this malware via BAFS.
If BAFS works properly then almost all samples are blocked at Level 1. If BAFS does not work properly, then the cloud backend is not used at Level 1, so most of the samples can be blocked only at Level 2 or 3.

As far as I remember, we implemented it because it gives the same result as PowerShell's Unblock-File / Remove-Item from the PS command line: running a file without a prompt:

Yes, this cmdlet removes MOTW by removing the Zone.Identifier stream from the file. Does AVLab use it to unblock all the samples downloaded by Google Chrome?

Adrian Ścibor

From AVLab.pl
Thread author
Verified
Well-known
Apr 9, 2018
214
You probably meant SmartScreen for Explorer instead of UAC.
No, I meant MOTW instead. The SS works only in the Edge / Internet Explorer browsers (and others too?), therefore it is unused, because we use the Chrome browser in our tests.


UAC ignores MOTW,

My point was that some malware requires UAC to be accepted in order to run, therefore UAC is permanently disabled. From a test point of view, it doesn't matter because the prompt has to be accepted to see what the malware is doing.

Does AVLab use it to unblock all the samples downloaded by Google Chrome?

The files downloaded from Chrome do not contain the MOTW information, so there is no need to remove ("unblock") it.

upnorth

From a test point of view, it doesn't matter because the prompt has to be accepted to see what the malware is doing.
Very much the same point of view as the Malware Hub on this forum. It's the same thing with macro prompts in Office samples. We want the actual antivirus (AV) product to be tested as much as possible, and to see how it reacts, not every other single layer in the OS or in other added software. Otherwise a genuine AV test risks becoming automatically skewed and less accurate, and one should start a whole different set of tests, methodology, etc. if the AV itself isn't the point of interest.

Andy Ful

From Hard_Configurator Tools
No, I meant MOTW instead. The SS works only in the Edge / Internet Explorer browsers (and others too?), therefore it is unused, because we use the Chrome browser in our tests.
You seem not to see the difference between SmartScreen in the web browser (Edge, IE) and SmartScreen for Explorer (Windows File Explorer). These are different things. SmartScreen for Explorer works system-wide and is independent of SmartScreen in Edge or IE. It was introduced in Windows 8. SmartScreen for Explorer is usually disabled when testing malware. As I said several times, disabling it does not remove MOTW from files.

My point was that some malware requires UAC to be accepted in order to run, therefore UAC is permanently disabled. From a test point of view, it doesn't matter because the prompt has to be accepted to see what the malware is doing.

That is true, but unrelated to MOTW, SmartScreen, and BAFS.

The files downloaded from Chrome do not contain the MOTW information, so there is no need to remove ("unblock") it.

Instead of saying this, please check it. Download any EXE file via Google Chrome and use the right-click context menu in File Explorer (not IE). You should see something like this:

[Screenshot: file Properties dialog with the "Unblock" option, shown for a file with MOTW]


If the file does not have MOTW (or MOTW was removed), it will look different:

[Screenshot: file Properties dialog without the "Unblock" option]
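For anyone who prefers a programmatic check over the Properties dialog, a minimal sketch (Windows/NTFS; the path below is a placeholder):

```python
# MOTW lives in the "Zone.Identifier" alternate data stream on NTFS.
# ZoneId=3 marks the Internet zone (what Chrome stamps on downloads).
path = r"C:\Users\test\Downloads\sample.exe"  # placeholder: any file saved by Chrome

def read_motw(path: str) -> str | None:
    try:
        with open(path + ":Zone.Identifier") as ads:
            return ads.read()  # typically "[ZoneTransfer]" and "ZoneId=3", plus referrer/host URLs
    except OSError:
        return None  # stream absent -> the file carries no MOTW

tag = read_motw(path)
print(tag if tag is not None else "No MOTW (Zone.Identifier) on this file")

# The equivalent of PowerShell's Unblock-File is deleting the stream:
#   import os; os.remove(path + ":Zone.Identifier")
```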
