App Review - Malware Obfuscation Part 1

It is advised to take all reviews with a grain of salt. In extreme cases some reviews use dramatization for entertainment purposes.
Content created by cruelsister
Apart from questions about the delivery method, this kind of malware can be a challenge for popular AVs/EDRs in enterprise environments.
If I understand the idea of this thread correctly, it is related to code obfuscation, and the presented malware is a good example. The obfuscation is done here on two different levels:
  1. Obfuscation of the LUA script (see the sketch at the end of this post).
  2. Obfuscation related to using a non-standard compiler.
Malware is predominantly written in C/C++ and is compiled with Microsoft’s compiler. However, trying to answer RQ1 with our experiments, our work practically shows that by shifting the codebase to another, less used programming language or compiler, malware authors can significantly decrease the detection rate of their binaries but simultaneously increase the reverse engineering effort of the malware analysts.
While shifting to another programming language may seem complicated, especially when considering less popular ones, large language models (LLMs) may come to the rescue; after all, they have proven their capacity in generating code quite accurately [35, 22, 23, 41, 16] and various cybersecurity tasks [12, 37], and malicious actors are abusing them. As a result, they can translate code from one programming language to another, requiring little fine-tuning. This way, malware authors can seamlessly develop loaders, droppers, and other components in languages they may not be familiar with.
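To make the first level concrete, below is a minimal, harmless sketch of the typical pattern: the real code is stored as encoded bytes and only rebuilt and executed at runtime. This is my own simplified illustration, not the code from the reviewed sample:

Code:
-- Minimal illustration of script-level obfuscation (not the reviewed sample).
-- The real logic is stored as numeric byte values and only becomes readable
-- Lua source at runtime, which defeats naive string-based signatures.
local encoded = { 112, 114, 105, 110, 116, 40, 34, 72, 101, 108, 108, 111, 34, 41 } -- print("Hello")
local chunks = {}
for i, b in ipairs(encoded) do
    chunks[i] = string.char(b)         -- rebuild the hidden source one byte at a time
end
local source = table.concat(chunks)    -- now holds: print("Hello")
local fn = load(source)                -- compile the recovered source (use loadstring on Lua 5.1)
if fn then fn() end                    -- and execute it

Real loaders layer this with junk code, renamed variables, and multi-stage decoding, but the principle is the same: the interesting strings never appear in the file as plain text.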
 
it is related to code obfuscation
Does obfuscation make detection equally difficult for both blacklisting (AV) and whitelisting (app control) solutions, or just the AV?
As far as I know, app control does not allow what it does not know, contrary to AV, which does not allow what it knows.
 
Does obfuscation make detection equally difficult for both blacklisting (AV) and whitelisting (app control) solutions, or just the AV?
As far as I know, app control does not allow what it does not know, contrary to AV, which does not allow what it knows.

The main problem is for AVs/EDRs that try to detect malware, and not only block it via reputation, prevalence, code integrity, etc.
As we could see in the video, the AV detected the loader too late (the malware managed to establish persistence via a scheduled task).
Furthermore, the analysis of the loader by the detection modules was incomplete, so MD could not fully heal the system. It killed the loader but missed the final payload launched via the scheduled task. If the same malware were coded/compiled in C/C++, the AV would have a much greater chance of removing the scheduled task or the final payload.
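For readers less familiar with this persistence trick, here is a rough, purely hypothetical sketch of what such a loader does; the task name and paths are invented, not taken from the video:

Code:
-- Hypothetical sketch of scheduled-task persistence (all names invented).
-- The loader drops the final payload and registers a task that keeps
-- re-launching it, so killing the loader alone does not clean the system.
local payload  = [[C:\Users\Public\updater.exe]]   -- invented path of the dropped payload
local taskname = "SystemUpdater"                   -- invented task name
os.execute(string.format(
    'schtasks /create /f /tn "%s" /tr "%s" /sc minute /mo 30',
    taskname, payload))                            -- re-runs the payload every 30 minutes

Until the task itself is removed, the payload keeps coming back, which is exactly the cleanup gap described above.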
Of course, once LUA is used more frequently, the machine learning modules will improve significantly. But I am afraid that most attackers will have moved on to another interpreter before that happens.
 
the AV detected the loader too late
That is exactly why I asked about the role of app control.

The obfuscation has made detection by the AV difficult (it was detected, but too late).
Could obfuscation make the job of app control easier? The less the file is known to app control, the more likely it is to be blocked.

And this is the reason for asking whether app control (WDAC) plus SRP were used through WHHL or not, as the video demonstrated only setting the ASR rules via ConfigureDefender.
 
And this is the reason for asking whether app control (WDAC) plus SRP were used through WHHL or not, as the video demonstrated only setting the ASR rules via ConfigureDefender.

I doubt it. A more probable scenario was WHHLight in default settings, without using RunBySmartScreen. The initial shortcut was whitelisted by default in this location. The files did not originate from the Internet (no MOtW), so the LUA loader was not blocked by SmartScreen.
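For context: MOtW is simply the Zone.Identifier alternate data stream that browsers and mail clients attach to downloaded files; files copied locally (or from a FAT-formatted flash drive) do not carry it, so SmartScreen has nothing to act on. A small sketch of how the mark can be checked on NTFS (the path is an example, not taken from the video):

Code:
-- Check whether a file carries the Mark of the Web (Zone.Identifier ADS).
-- On NTFS the stream can be opened like an ordinary file.
local path = [[C:\Users\Public\Desktop\loader.lua]]   -- example path only
local ads = io.open(path .. ":Zone.Identifier", "r")
if ads then
    print("MOtW present:")
    print(ads:read("*a"))   -- typically [ZoneTransfer] with ZoneId=3 (Internet)
    ads:close()
else
    print("No MOtW - SmartScreen has no zone information for this file")
end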

If the files had originated from a flash drive and WHHLight was in default settings, the user should use RunBySmartScreen, and the shortcut would be blocked:

[attached screenshot: shortcut blocked by RunBySmartScreen]


In another possible scenario, the shortcut could be opened directly from a flash drive, but then the loader could also be blocked by the ASR rule related to running files from USB (included in ConfigureDefender HIGH settings).

Finally, there is a special setting in SWH that blocks shortcuts + executables on the Desktop, and the user can whitelist only trusted shortcuts.
 
I doubt it. A more probable scenario was WHHLight in default settings, without using RunBySmartScreen. The initial shortcut was whitelisted by default in this location. The files did not originate from the Internet (no MOtW), so the LUA loader was not blocked by SmartScreen.

If the files had originated from a flash drive and WHHLight was in default settings, the user should use RunBySmartScreen, and the shortcut would be blocked:

[attached screenshot: shortcut blocked by RunBySmartScreen]

In another possible scenario, the shortcut could be opened directly from a flash drive, but then the loader could also be blocked by the ASR rule related to running files from USB (included in ConfigureDefender HIGH settings).

Finally, there is a special setting in SWH that blocks shortcuts + executables on the Desktop, and the user can whitelist only trusted shortcuts.
Unpopular opinion. Disabling AutoPlay is an underrated security win. It allows for file scanning before execution and works perfectly with WHHL. This effectively renders null and void the excuse that this specific entry vector is 'necessary' for testing desktop-executed malware. If we can close the door, we don't need to justify leaving it open for the sake of a simulation.
 
I am not sure what we are discovering and re-discovering yet again; obfuscation is used in 999 samples out of a thousand, and I am really not sure what's going on here with the two pages generated already.

Obfuscation does NOT completely render antivirus detection useless.
It may or may not affect static analysis (depending on whether or not there are imports in C++, or certain features in scripting languages like LUA, and so on).

But obfuscation does not affect dynamic analysis systems; to fool them, you'd need a bit more than a mediocre .NET compiler with ConfuserEx or a LUA script.

LUA doesn't even come pre-installed (just like Python); users will have to download the LUA runtime and install it manually, and I guarantee 7 out of 10 don't even know what LUA is.
As with Java, AutoIt, JPHP and other uncommon languages, some detections may fail, whilst others (such as the cloud analysis, when configured correctly) would upload the script for analysis; these pick up where the others fail.

The antivirus software that detects by reputation is also not affected by obfuscation; in fact, the more frequently you mutate the code, the more likely you are to get a block.
 
Unpopular opinion. Disabling AutoPlay is an underrated security win.
The full name of the ASR rule I mentioned is: "Block untrusted and unsigned processes that run from USB". It is unrelated to AutoPlay. It is similar to SAC, but applies only to files originating from USB and does not use a WDAC policy.
I mentioned the flash drive scenario because it is closely related to file delivery to the Desktop with no MOtW.
 
Obfuscation does NOT completely render antivirus detection useless.
It may or may not affect static analysis (depending on whether or not there are imports in C++, or certain features in scripting languages like LUA, and so on).

The antivirus software that detects by reputation is also not affected by obfuscation; in fact, the more frequently you mutate the code, the more likely you are to get a block.
@Trident This is some of the hope I was looking for in all of the back and forth :) @Parkinsond just posted the same thought ;) :)
 
The full name of the ASR rule I mentioned is: "Block untrusted and unsigned processes that run from USB". It is unrelated to AutoPlay. It is similar to SAC, but applies only to files originating from USB and does not use a WDAC policy.
I mentioned the flash drive scenario because it is closely related to file delivery to the Desktop with no MOtW.
Microsoft killed AutoPlay a decade ago btw.
 
As with Java, AutoIt, JPHP and other uncommon languages, some detections may fail,

I think that @cruelsister wants to show what you have just said. Her next video will probably show that Comodo can easily contain such threats, which is true.

whilst others (such as the cloud analysis, when configured correctly) would upload the script for analysis; these pick up where the others fail.

Even if the script is uploaded, the automated analysis is less effective than for AMSI-supported scripts. LUA-based attacks are also much rarer, so machine learning is less effective. Many AVs detect such rare threats by simple signatures.
 
I think that @cruelsister wants to show what you have just said.



Even if the script is uploaded, the automated analysis is less effective than for AMSI-supported scripts. LUA-based attacks are also much rarer, so machine learning is less effective. Many AVs detect such rare threats by simple signatures.
This varies from solution to solution.

The final payload, if delivered in the virtual environment, will cause the initial vector (LUA script or whatever it is) to also be blocked.

The cloud analysis is usually based on behaviour, not on signatures. And the models are not what runs on the client devices; they are heavy models trained specifically for dynamic analysis.

However, LUA is kinda novel so the sandbox may not even have it installed.
Lua also allows code to be exported as a standard Windows executable.
 
The full name of the ASR rule I mentioned is: "Block untrusted and unsigned processes that run from USB". It is unrelated to AutoPlay. It is similar to SAC, but applies only to files originating from USB and does not use a WDAC policy.
I mentioned the flash drive scenario because it is closely related to file delivery to the Desktop with no MOtW.
I raised this point because desktop malware testing is often justified by the 'flash drive execution' scenario. However, if AutoPlay is disabled, that justification becomes irrelevant, as the primary infection vector is effectively neutralized.
 
This varies from solution to solution.

The final payload, if delivered in the virtual environment, ...

Here is the problem. In most cases, only the loader will be uploaded.

However, LUA is kinda novel so the sandbox may not even have it installed.

In the attack, LUA does not have to be installed, as shown in the example I posted:

Even if LUA is installed in the sandbox, the uploaded file will often be only the custom LUA loader, and the loader will not load anything in the sandbox.
However, MD somehow managed to detect it after extended analysis. But the problem was with identifying the later infection stages.
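To illustrate why the bare loader tells the sandbox so little, here is a purely illustrative sketch with invented names (the real sample is more convoluted):

Code:
-- Illustrative only: a loader that is useless without its companion stage.
-- If the sandbox receives just this file, nothing interesting ever runs.
local f = io.open("stage2.dat", "rb")   -- second stage delivered separately (invented name)
if not f then
    return                              -- in the sandbox: no companion file, silent exit
end
local blob = f:read("*a")
f:close()
-- On the victim machine the blob would first be decoded (the key can even be
-- derived from the environment) and only then handed to load(); the decoding
-- step is omitted in this sketch.
local stage2 = load(blob)
if stage2 then stage2() end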
 
Here is the problem. In most cases, only the loader will be uploaded.



In the attack, LUA does not have to be installed, as shown in the example I posted:

Even if LUA is installed in the sandbox, the uploaded file will often be only the custom LUA loader, and the loader will not load anything in the sandbox.
However, MD somehow managed to detect it after extended analysis. But the problem was with identifying the later infection stages.
Well, in this case you are delivering the JIT.

There are many ways it can be done that can go unnoticed; some of them can be tackled by certain ASR rules.

An attacker will have to use social engineering in real life, so the best way for them to deliver that would be to either write a small bootstrapper in a very common language that downloads/drops the rest, or use a script that downloads and executes them.

Either way, it will be effective in targeted attacks against systems that are not properly hardened and against solutions that do not use proper emulation or aggressive static analysis.

Other vendors will be catching up with definitions.

Blocking this at runtime, through behavioural analysis on the user machine, won't be a straightforward task.