But, I've always agreed with this, as well as other members who've brought it up.

Fair warning, the last time I brought this up, it didn't go well for me. People can get a bit 'pile-on' about it.
Not directed at you, just a nod to the previous threads where things got a bit crowded with people ganging up.

But, I've always agreed with this, as well as other members who've brought it up.
Do you mean who created the file (the person), or how the file landed on the PC after being created?

Where does the file originate from?
Malware is predominantly written in C/C++ and is compiled with Microsoft's compiler. However, trying to answer RQ1 with our experiments, our work practically shows that by shifting the codebase to another, less used programming language or compiler, malware authors can significantly decrease the detection rate of their binaries while simultaneously increasing the reverse engineering effort of the malware analysts.
While shifting to another programming language may seem complicated, especially when considering less popular ones, large language models (LLMs) may come to the rescue; after all, they have proven their capacity to generate code quite accurately [35, 22, 23, 41, 16] and to handle various cybersecurity tasks [12, 37], and malicious actors are abusing them. As a result, they can translate code from one programming language to another with little fine-tuning. This way, malware authors can seamlessly develop loaders, droppers, and other components in languages they may not be familiar with.
Does obfuscation make detection equally difficult for both blacklisting (AV) and whitelisting (App control) solutions, or just the AV?

It is related to code obfuscation.
Does obfuscation make detection equally difficult for both blacklisting (AV) and whitelisting (App control) solutions, or just the AV?
As far as I know, App control does not allow what it does not know, whereas AV only disallows what it knows.
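Just to make that distinction concrete, here is a toy sketch in Python (the hashes and names are invented, it is not any product's actual logic):

```python
# Toy sketch: a blacklist-style AV blocks only what it knows is bad,
# while an allowlist-style application control runs only what it knows is good.
KNOWN_BAD = {"bad_hash_1", "bad_hash_2"}           # hypothetical hashes the AV has signatures for
KNOWN_GOOD = {"trusted_hash_1", "trusted_hash_2"}  # hypothetical hashes the policy trusts

def av_blacklist_allows(file_hash):
    return file_hash not in KNOWN_BAD    # unknown files run by default

def app_control_allows(file_hash):
    return file_hash in KNOWN_GOOD       # unknown files are blocked by default

# An obfuscated, never-seen-before sample has a brand-new hash:
print(av_blacklist_allows("new_mutated_hash"))  # True  -> slips past the blacklist
print(app_control_allows("new_mutated_hash"))   # False -> still blocked by the allowlist
```

This is also why the answer to the question above differs: obfuscation changes what the blacklist sees, but it cannot add a file to the allowlist.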
That is the exact reason for asking about the role of app control.

The AV detected the loader too late.
And this is the reason for asking whether app control (WDAC) plus SRP were used through WHHL or not, as the video demonstrated only setting the ASR rules via ConfigureDefender.
Unpopular opinion. Disabling AutoPlay is an underrated security win. It allows for file scanning before execution and works perfectly with WHHL. This effectively renders null and void the excuse that this specific entry vector is 'necessary' for testing desktop-executed malware. If we can close the door, we don't need to justify leaving it open for the sake of a simulation.

I doubt it. A more probable scenario was WHHLight in default settings without using RunBySmartScreen. The initial shortcut was whitelisted by default in this location. The files did not originate from the Internet (no MOtW), so the LUA loader was not blocked by SmartScreen.
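For anyone who wants to check this themselves: on NTFS the Mark of the Web is stored in the Zone.Identifier alternate data stream, which SmartScreen consults. A minimal sketch (Windows only, hypothetical file path):

```python
# Minimal sketch, Windows/NTFS only: the Mark of the Web lives in the
# Zone.Identifier alternate data stream of a downloaded file.
def read_motw(path):
    """Return the Zone.Identifier contents, or None if the file carries no MOtW."""
    try:
        with open(path + ":Zone.Identifier", "r", encoding="utf-8", errors="replace") as ads:
            return ads.read()  # typically "[ZoneTransfer]\nZoneId=3" for Internet-sourced files
    except OSError:
        return None            # no stream -> no MOtW -> SmartScreen is not consulted

if __name__ == "__main__":
    # Hypothetical path; a file created locally by a script usually prints None.
    print(read_motw(r"C:\Users\Public\Downloads\sample.exe"))
```

A file created locally or copied from a FAT32-formatted flash drive typically has no such stream, which matches the no-MOtW scenario above.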
If the files originated from a flash drive and WHHLight was in default settings, the user should use RunBySmartScreen, and the shortcut would be blocked:
[Attachment: screenshot of RunBySmartScreen blocking the shortcut]
In another possible scenario, the shortcut could be opened directly from a flash drive, but then the loader could also be blocked by the ASR rule related to running files from USB (included in ConfigureDefender HIGH settings).
Finally, there is a special setting in SWH that blocks shortcuts + executables on the Desktop, and the user can whitelist only trusted shortcuts.
That is exactly what I was asking about; thank you.

The antivirus software that detects by reputation is also not affected by obfuscation; in fact, the more frequently you mutate the code, the more likely you are to get a block.
The full name of the ASR rule noted by me is: "Block untrusted and unsigned processes that run from USB". It is unrelated to AutoPlay. It is similar to SAC, but only for files originating from the USB and does not use WDAC policy.

Unpopular opinion. Disabling AutoPlay is an underrated security win.
@Trident This is some of the hope I was looking for in all of the back and forth.

Obfuscation does NOT completely render antivirus detection useless.
It may or may not affect static analysis (depending on whether or not there are imports in C++, or certain features in scripting languages like LUA, and so on).
The antivirus software that detects by reputation is also not affected by obfuscation; in fact, the more frequently you mutate the code, the more likely you are to get a block.
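To illustrate why mutating the code backfires against reputation-based detection: every rebuild produces a new hash that the cloud has never seen, so its prevalence is zero. A toy sketch (the database, threshold, and verdict names are invented):

```python
import hashlib

# Hypothetical prevalence data: SHA-256 -> number of machines that reported the file.
KNOWN_FILES = {
    hashlib.sha256(b"well-known benign build").hexdigest(): 1_000_000,
}

def reputation_verdict(file_bytes, min_prevalence=100):
    """Rough sketch of a prevalence check, not any vendor's real pipeline."""
    prevalence = KNOWN_FILES.get(hashlib.sha256(file_bytes).hexdigest(), 0)
    # Every mutation/recompilation yields a new hash with zero prevalence,
    # so the sample looks brand new and gets blocked or sent for deeper analysis.
    return "allow" if prevalence >= min_prevalence else "block or analyze"

print(reputation_verdict(b"well-known benign build"))      # allow
print(reputation_verdict(b"freshly mutated, never seen"))  # block or analyze
```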
Microsoft killed AutoPlay a decade ago btw.

The full name of the ASR rule noted by me is: "Block untrusted and unsigned processes that run from USB". It is unrelated to AutoPlay. It is similar to SAC, but only for files originating from the USB and does not use WDAC policy.
I mentioned the flash drive scenario because it is closely related to file delivery to the Desktop with no MOtW.
I like to ask.

@Trident This is some of the hope I was looking for in all of the back and forth.

@Parkinsond just posted the same thought.
As with Java, AutoIT, JPHP, and other uncommon languages, some detections may fail, whilst others (such as cloud analysis, when configured correctly) would upload the script for analysis; these pick up where the others fail.
This varies from solution to solution.

I think that @cruelsister wants to show what you have just said.
Even if the script is uploaded, the automated analysis is less effective than for AMSI-supported scripts. LUA-based attacks are also much rarer, so machine learning is less effective. Many AVs detect such rare threats by simple signatures.
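On the "simple signatures" point, a rare threat family often ends up covered by plain byte patterns rather than by the ML models that protect mainstream file types. A toy sketch (the patterns and sample bytes are invented):

```python
# Hypothetical byte patterns standing in for plain static signatures;
# real signatures are vendor-specific and far more elaborate.
SIGNATURES = {
    "generic_lua_loader": b"loadstring(",
    "generic_downloader": b"URLDownloadToFile",
}

def scan_bytes(data):
    """Return the names of signatures whose pattern occurs in the raw bytes."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

sample = b'... lua51 ... loadstring("obfuscated payload") ...'  # invented sample bytes
print(scan_bytes(sample))  # ['generic_lua_loader']
```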
I raised this point because desktop malware testing is often justified by the 'flash drive execution' scenario. However, if AutoPlay is disabled, that justification becomes irrelevant, as the primary infection vector is effectively neutralized.

The full name of the ASR rule noted by me is: "Block untrusted and unsigned processes that run from USB". It is unrelated to AutoPlay. It is similar to SAC, but only for files originating from the USB and does not use WDAC policy.
I mentioned the flash drive scenario because it is closely related to file delivery to the Desktop with no MOtW.
This varies from solution to solution.
The final payload, if delivered on the virtual environment, ...
However, LUA is kinda novel, so the sandbox may not even have it installed.
Well, in this case you are delivering the JIT.

Here is the problem. In most cases, only the loader will be uploaded.
In the attack, LUA does not have to be installed, as it is shown in the example I posted:
App Review Post in thread 'Malware Obfuscation Part 1'
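For readers wondering how that works: lua51.dll exports the standard Lua 5.1 C API, so any thin host shipped next to it can drive the interpreter directly; nothing has to be installed system-wide. A harmless sketch of the mechanism via Python's ctypes (it assumes a lua51.dll in the working directory and is not a reconstruction of the actual loader):

```python
import ctypes

# Assumption: a LuaJIT/Lua 5.1 runtime DLL sits next to this host,
# just as a Lua51.dll can be dropped next to a thin loader executable.
lua = ctypes.CDLL("lua51.dll")
lua.luaL_newstate.restype = ctypes.c_void_p  # lua_State* must not be truncated on x64

L = lua.luaL_newstate()                      # create an interpreter state
lua.luaL_openlibs(ctypes.c_void_p(L))        # load the standard libraries

# Compile and run a harmless script string from memory.
script = b'print("running from an embedded Lua runtime, no installation needed")'
if lua.luaL_loadstring(ctypes.c_void_p(L), script) == 0:
    lua.lua_pcall(ctypes.c_void_p(L), 0, 0, 0)

lua.lua_close(ctypes.c_void_p(L))
```

This is essentially what the component list below describes: the runtime travels with the loader.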
I think that the attack in the video can be similar to that one (a two-year-old example used against the student gamer community):
[Attachment: image of the two-year-old example attack]
However, instead of using the batch script, the shortcut was used.
- Lua51.dll – LuaJIT Runtime interpreter
- Compiler.exe – a thin compiled Lua loader
- Lua script – Malicious Lua script
- Launcher.bat – Batch script used to run Compiler.exe with the malicious script as parameter
If so, then for files originating from the Internet, most such attacks can be blocked by SmartScreen, SAC, or WDAC.
Edit: SRP, WDAC...
Even if LUA is installed in the sandbox, the uploaded file will often be only the custom LUA loader. The loader will not load anything in the sandbox.
However, MD could somehow detect it after extended analysis, but the problem was with identifying the later infection stages.