Advice Request: Any real-time software that uses non-traditional ways to find malware?

Please provide comments and solutions that are helpful to the author of this topic.

vaccineboy

Level 3
Verified
Well-known
Sep 5, 2018
141



Need somebody else to test VirusCope's abilities. Had no time to run hundreds of files one by one.
Hi, do you mind explaining the result in a few words? Pardon me, I'm non-technical. Thanks.
 

Nagisa

Level 7
Thread author
Verified
Jul 19, 2018
342
There are three folders with a specific number of files in each (Clean: 2202, PUP: 1065, Malicious: 1050). I made a static scan of every one of them, and those are the scan results. Don't take this as a test, because it doesn't show proactive detection capability.
 

ForgottenSeer 89360

Hmmm, very interesting. What obfuscator do you recommend? I would love to play around with this. My initial thought is that it would not produce a diverse enough (or even realistic/effective) training data set, but I would like to experiment with it out of curiosity.

Scripts and fileless malware in general are odd compared to PE32. I have read from various sources that ROUGHLY 33% of all PE32 files are malicious.


Whereas both malicious and safe scripts are far less common, and the ratio of malicious to safe scripts that an end user will encounter is likely higher. In other words, I believe the best practice is to auto-allow the obviously safe scripts (high file reputation, spawned from a safe process, etc.) and block the rest, especially if they are located in a common malware hiding spot.
They won’t be located in common malware hiding spots, because they are the first part of the attack chain. The user will have downloaded them either to the desktop or to %userprofile%\Downloads.
Simply using obfuscation won’t produce results that are diverse enough, but if you combine it with imagination, the results can be very different.
Invoke-Obfuscation Master is one PowerShell obfuscator I like, but it’s not difficult to obfuscate code even without it.
You just have to know that:
PowerShell supports various types of encoding, compression and concatenation,
and
PowerShell treats every line as a new command, unless the line ends mid-expression (for example, with a pending calculation).
Attackers normally execute long PowerShell payloads by adding a calculation at the end/beginning of a line.
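The string tricks described above can be sketched in Python (rather than PowerShell, purely as an illustration): concatenation, character codes, and encoding are the three transformations obfuscators typically layer on top of each other. The command string is just an example.

```python
import base64

# Toy illustration (in Python, not PowerShell) of common obfuscation tricks.
command = "Invoke-WebRequest"

# 1. Concatenation: split the string into innocuous-looking fragments.
concatenated = "Inv" + "oke-" + "Web" + "Request"

# 2. Character codes: rebuild the string from its code points.
from_chars = "".join(chr(ord(ch)) for ch in command)

# 3. Encoding: ship the command as base64 and decode it at run time
#    (the rough equivalent of powershell.exe -EncodedCommand, which
#    expects base64-encoded UTF-16LE).
encoded = base64.b64encode(command.encode("utf-16-le"))
decoded = base64.b64decode(encoded).decode("utf-16-le")

assert concatenated == from_chars == decoded == command
```

All three variants reconstruct the same command at run time, which is why static string matching alone struggles against even lightly obfuscated scripts.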

Treating scripts spawned from a trusted process as “safe” is the biggest mistake that could be made. This is what attackers usually rely on; in my opinion, they started using scripts precisely to bypass reputation checks. The same goes for Java malware.
You have a process that is already whitelisted (wscript, cscript, cmd, powershell, javaw), but its behaviour is variable.
 
Last edited by a moderator:

cruelsister

Level 43
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Apr 13, 2013
3,224
First off, my compliments to Dan for providing the samples. It must have been a pain to do, and I think I speak for all here that he has both our Thanks and continuing Respect!

That being said, although in general running malware against a favorite product is always quite a Hoot, it is nice to be aware of the nature of the malware being run as things like Mechanism of Action and Age of samples are important to consider in order to reasonably evaluate the effectiveness of a given product in any testing scenario.

So for those with neither the time nor inclination to actually run and analyze the samples provided, it may be enlightening to take a random sample of the files in the Malware section, send them to VT and note the initial submission date (hint).

 

ForgottenSeer 89360

First off, my compliments to Dan for providing the samples. It must have been a pain to do, and I think I speak for all here that he has both our Thanks and continuing Respect!

That being said, although in general running malware against a favorite product is always quite a Hoot, it is nice to be aware of the nature of the malware being run as things like Mechanism of Action and Age of samples are important to consider in order to reasonably evaluate the effectiveness of a given product in any testing scenario.

So for those with neither the time nor inclination to actually run and analyze the samples provided, it may be enlightening to take a random sample of the files in the Malware section, send them to VT and note the initial submission date (hint).

I ran everything remaining after the AVG scan (folder: “Malware”), monitoring what was going on.
One sample was deleted by AVG IDP (behavioural blocker).

Many others were perfectly harmless cracks and patches that didn’t drop any files or register autoruns; the system remained clean.
VT had very few vendors classifying them as PUP/PUA.

One of the samples was a game, which I’ve highlighted (VT 1/72 on a 6-month-old file).

The rest were just corrupted or looking for files that don’t exist (which is a good sign it’s not actually an infection).

In the clean folder, I immediately noticed files that are a far cry from “clean”. These files had:
1. A very small size. No developer can deliver a safe and useful program in 50 KB (even with OOP and extremely effective code reuse). Even cracks/patches/keygens are larger than that. A very small size suggests heavy compression, which is usually used to evade detection. It also suggests a lack of resources/UI, which is, again, a sign of malware.
2. A poor icon. An icon represents the software and helps users remember and identify the program. No developer would use tiny, low-quality, odd icons; if they did, I wouldn’t execute their software even if it were a free rival of Adobe Photoshop.
3. Unconvincing metadata.
I submitted these files to VT and they were all malware, as my intuition suggested.
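The three red flags described above can be expressed as a toy triage score. This is a hypothetical sketch: the 50 KB threshold comes from the post, while `suspicion_score` and its boolean flags are invented helpers, not any vendor's logic.

```python
import os

# Hypothetical triage score based on the three red flags described above.
def suspicion_score(path, has_custom_icon, has_metadata):
    score = 0
    if os.path.getsize(path) < 50 * 1024:  # unusually small executable
        score += 1
    if not has_custom_icon:                # generic or low-quality icon
        score += 1
    if not has_metadata:                   # missing version info / publisher
        score += 1
    return score  # 0 = looks clean, 3 = worth submitting to VT
```

A real triage tool would read the icon and version resources from the PE file itself; here they are passed in as booleans to keep the sketch short.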
 
Last edited by a moderator:

danb

From VoodooShield
Verified
Top Poster
Developer
Well-known
May 31, 2017
1,742
Treating scripts spawned from a trusted process as “safe” is the biggest mistake that could be made. This is what attackers usually rely on; in my opinion, they started using scripts precisely to bypass reputation checks. The same goes for Java malware.
You have a process that is already whitelisted (wscript, cscript, cmd, powershell, javaw), but its behaviour is variable.
None of these vulnerable processes should be whitelisted or blocked globally; it should depend on the individual attack chain.

First off, my compliments to Dan for providing the samples. It must have been a pain to do, and I think I speak for all here that he has both our Thanks and continuing Respect!

That being said, although in general running malware against a favorite product is always quite a Hoot, it is nice to be aware of the nature of the malware being run as things like Mechanism of Action and Age of samples are important to consider in order to reasonably evaluate the effectiveness of a given product in any testing scenario.

So for those with neither the time nor inclination to actually run and analyze the samples provided, it may be enlightening to take a random sample of the files in the Malware section, send them to VT and note the initial submission date (hint).

Thank you CS, but I cannot take credit for creating these samples; they were created by another vendor to test static ML/Ai malware detection, and they are 3+ years old.


But one of the key capabilities of ML/Ai malware detection that next-gen vendors always tout is its ability to detect old and new samples alike, which is one reason this test is so interesting. What happens when a malcoder decides to use a 5-10 year old IDE to build their malware? Then you have “old” new malware that has a better chance of infecting. This does actually happen, probably more often than people realize.

You like to test scripts because that is your specialty and you are very good at it (the best ;)), and it gives a great sense of a product’s efficacy and capabilities. But I think it is also fair to test static ML/Ai efficacy on old and new samples. It is much better to detect malware before it executes than to rely completely on behavior analysis.

Edit: Just thought of this example... you would expect modern security software to block scripts that you wrote 3-5 years ago, right? ;)
 
Last edited:

danb

From VoodooShield
Verified
Top Poster
Developer
Well-known
May 31, 2017
1,742
I ran everything remaining after the AVG scan (folder: “Malware”), monitoring what was going on.
One sample was deleted by AVG IDP (behavioural blocker).

Many others were perfectly harmless cracks and patches that didn’t drop any files or register autoruns; the system remained clean.
VT had very few vendors classifying them as PUP/PUA.

One of the samples was a game, which I’ve highlighted (VT 1/72 on a 6-month-old file).

The rest were just corrupted or looking for files that don’t exist (which is a good sign it’s not actually an infection).

In the clean folder, I immediately noticed files that are a far cry from “clean”. These files had:
1. A very small size. No developer can deliver a safe and useful program in 50 KB (even with OOP and extremely effective code reuse). Even cracks/patches/keygens are larger than that. A very small size suggests heavy compression, which is usually used to evade detection. It also suggests a lack of resources/UI, which is, again, a sign of malware.
2. A poor icon. An icon represents the software and helps users remember and identify the program. No developer would use tiny, low-quality, odd icons; if they did, I wouldn’t execute their software even if it were a free rival of Adobe Photoshop.
3. Unconvincing metadata.
I submitted these files to VT and they were all malware, as my intuition suggested.
Please remember, this is a static ML/Ai benchmark and efficacy test. If this was a dynamic ML/Ai benchmark and efficacy test, I would not expect a static test to perform well.
 

ForgottenSeer 89360

It is much better to detect malware before it executes instead of relying completely on behavior analysis.
It is best, but it’s not always possible. Behavioural blockers have greater visibility than static and dynamic emulation, which I personally, as an amateur attacker/tester, have managed to bypass. Once you execute something, it reveals its true form, and many times it’s not possible to block it prior to that moment.

It’s very important to distinguish the various forms of behavioural blocking:

Policy-based: certain actions are simply not allowed; for instance, Adobe Reader shouldn’t create executables.
This works best against scripts.
Some vendors might call it IDS. It’s a form of automated HIPS.

Static/fixed, also called non-profiled: the blocker monitors for certain characteristics, each of which increases or decreases the probability of maliciousness. Once a certain score is reached, the infection is “cured”.
This type of behavioural blocking doesn’t classify threats.
Bitdefender Advanced Threat Defence is an example of such a blocker.
Fixed-type BBs are frequently enhanced by reputation, as they are more prone to FPs.
This works best against spyware-like threats (backdoors, RATs) that have a minimal attack chain and stay quiet/stealthy.

Dynamic/profile-based: contains profiles/patterns of threats, together with their attack chains. Once the full chain is observed, remediation starts.
These BBs can be recognised by the fact that they classify threats. It may be just a generic classification, but it can distinguish a RAT from ransomware.
This is also somewhat effective against scripts, but is usually used as a last line of defence.
This works on various types of threats.
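The score-based ("static/fixed") style described above can be sketched as follows. The event names, weights and threshold are all invented for illustration; no vendor's actual scoring is implied.

```python
# Minimal sketch of a score-based behavioural blocker: each observed action
# adjusts a suspicion score, and remediation fires once a threshold is
# crossed. Weights and threshold are hypothetical.
WEIGHTS = {
    "writes_to_startup": 30,
    "disables_defender": 40,
    "drops_executable": 20,
    "opens_browser": -10,  # benign-looking actions can lower the score
}
THRESHOLD = 60

def evaluate(events):
    score = 0
    for event in events:
        score += WEIGHTS.get(event, 0)
        if score >= THRESHOLD:
            return "terminate_and_cure"  # remediation, with no classification
    return "allow"
```

Note that the verdict carries no threat class: as described above, this style of blocker cures the infection without saying whether it was a RAT or ransomware.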
 
Last edited by a moderator:

danb

From VoodooShield
Verified
Top Poster
Developer
Well-known
May 31, 2017
1,742
I pretty much finished testing WD… as you guys know, it is very difficult to test. Basically, it was designed to protect the computer, not to test malpaks. I do not have exact numbers because files were constantly being uploaded to the cloud for analysis, so a lot of the time the initial static verdict would be clean, and later the file would be correctly classified as malware. So the initial static results were not all that great, but MS apparently has some pretty amazing malware analysis sandboxes, because most files were correctly identified after dynamic analysis in the cloud. The only issue is how many patient zeros there will be, especially considering polymorphic malware. Like all security products, WD is a work in progress, and I love the direction they are heading... overall it is an amazing product.

BTW, I discovered a little trick while performing the tests that might help anyone testing WD in the future. You know how, if you scan a lot of files with WD, it becomes overwhelmed (because it was designed to protect the computer, not to test malpaks)? I noticed that WD was not nearly as overwhelmed when it analyzed files as they were moved from one folder to another. So I simply moved the files from one folder to another, then used Defender Control 1.6 to turn WD off and delete the detected files along with the WD Protection History, then turned WD back on with Defender Control, then moved the files again, until all that remained were “clean” files. Then you can execute whatever is left over to perform dynamic analysis if you wish. If I had waited for the WD scans, it would have taken many, many days.
 

mazskolnieces

Level 3
Well-known
Jul 25, 2020
117
None of these vulnerable processes should be whitelisted or even blocked globally, it should depend individually on the attack chain.
That's contrary to the best practices of the entire industry. You can't argue that vendors don't disable them because they shouldn't be disabled. Those vendors would like to disable them, but don't so as to prevent a deluge of support requests from consumers. It's a matter of practicality for support operations, and not one of "this stuff should not be disabled."

In your own product, it's been proven that people cannot distinguish between blocked safe and malicious command lines and respond appropriately. When they have to reply to an alert, and make the wrong choice and allow the malicious command line, you are notorious for blaming the end user.

So much for your claim that VS is so easy any novice can figure it out.
 
Last edited:

mazskolnieces

Level 3
Well-known
Jul 25, 2020
117
You know how, if you scan a lot of files with WD, it becomes overwhelmed (because it was designed to protect the computer, not to test malpaks)?
This issue is limited to less than 0.01% of all users, so it is an irrelevant criticism to say Windows Defender is dreadfully slow when 99.99% of all users never face the issue. In fact, some users already have huge folders, use only Windows Defender, and have never once experienced your unsubstantiated claim that a Windows Defender scan "would take many, many days."

A 20 minute scan is not a big deal except for the tiny minority that just can't cope with it and those that want to bash Windows Defender for their own gain.

Not only that, folders can be excluded from Windows Defender scans very easily. People do that and yet they don't gripe that Windows Defender is so difficult to figure out and user unfriendly. Relative to your own product, Windows Defender is a lot easier to work with for the average computer user.
 

ForgottenSeer 89360

That's contrary to the best practices of the entire industry. You can't argue that vendors don't disable them because they shouldn't be disabled. Those vendors would like to disable them, but don't so as to prevent a deluge of support requests from consumers. It's a matter of practicality for operations, and not one of "this stuff should not be disabled."

In your own product, it's been proven that people cannot distinguish between blocked safe and malicious command lines and respond appropriately. When they have to reply to an alert, and make the wrong choice and allow the malicious command line, you are notorious for blaming the end user.

So much for your claim that VS is so easy any novice can figure it out.
I totally agree.

These processes should be, and always are, whitelisted. If you want to block their abuse, there are plenty of ways to do it. Generic blocks and alerts (especially the latter) won’t help a bit.
 

danb

From VoodooShield
Verified
Top Poster
Developer
Well-known
May 31, 2017
1,742
That's contrary to the best practices of the entire industry. You can't argue that vendors don't disable them because they shouldn't be disabled. Those vendors would like to disable them, but don't so as to prevent a deluge of support requests from consumers. It's a matter of practicality for support operations, and not one of "this stuff should not be disabled."

In your own product, it's been proven that people cannot distinguish between blocked safe and malicious command lines and respond appropriately. When they have to reply to an alert, and make the wrong choice and allow the malicious command line, you are notorious for blaming the end user.

So much for your claim that VS is so easy any novice can figure it out.
We have discussed this multiple times on multiple threads with your multiple accounts. Where is this proof of best practices? All you have to do is provide a link and you will have proven your point. Since I am such a HUGE fan of evidence, here is some that demonstrates why permanently disabling these items is a VERY bad idea... please read the entire thread.


Correct, vendors do not disable vulnerable processes, for many reasons, including support requests. There are probably other reasons beyond the ones I suggest, but it doesn't matter, because you just admitted that they do not disable them.

VS is highly flexible, and admins can prevent end users from auto-allowing new items in multiple ways, both locally and in the web management console. If there is an even better way to handle these, we could easily add an option to accommodate it; actually, the more I think about it, maybe I will add an option to silently block scripts and fileless malware until they are approved by the admin (which already exists as an option for all items). Either way, at least VS is refined to the point that it does not require VENDOR CO-MANAGEMENT of the web management console 🤣.
 

danb

From VoodooShield
Verified
Top Poster
Developer
Well-known
May 31, 2017
1,742
This issue is limited to less than 0.01% of all users, so it is an irrelevant criticism to say Windows Defender is dreadfully slow when 99.99% of all users never face the issue. In fact, some users already have huge folders, use only Windows Defender, and have never once experienced your unsubstantiated claim that a Windows Defender scan "would take many, many days."

A 20 minute scan is not a big deal except for the tiny minority that just can't cope with it and those that want to bash Windows Defender for their own gain.

Not only that, folders can be excluded from Windows Defender scans very easily. People do that and yet they don't gripe that Windows Defender is so difficult to figure out and user unfriendly. Relative to your own product, Windows Defender is a lot easier to work with for the average computer user.
This is not even worth responding to, but well, you know, Covid "lockdown" and everything...

I was perfectly clear that WD is amazing and that it was designed to protect the computer, not to analyze thousands of files from a malpak. You are trying to twist my words, but luckily users can easily scroll up a little to see what I actually said here:


Test it yourself. Disable WD, then drop 1,000+ malicious files on the desktop, then enable WD, then right click to scan with WD. Then post a video showing that it did not take all that long. Anyone who has actually done this knows exactly what will happen, and knows the scan will take a very long time.

You are not going to get me to bash WD... I think it is a great product that is a work in progress. I understand that you are not a fan of adding features or refining software in any meaningful way, but that is an entirely different story.

Having said that, if VS is so not user-friendly, maybe Andy's next project will be CV. ControlVoodooShield 🥱.
 

danb

From VoodooShield
Verified
Top Poster
Developer
Well-known
May 31, 2017
1,742
I totally agree.

These processes should be, and always are, whitelisted. If you want to block their abuse, there are plenty of ways to do it. Generic blocks and alerts (especially the latter) won’t help a bit.
I think there is a misunderstanding here. mazskolnieces believes all vulnerable processes should be permanently disabled... like ps, regedit, cmd, etc. What he has never understood is that ALL Windows files are vulnerable... some more than others. So if you want to narrow down the list of vulnerable processes, you could include script interpreters, regedit, rundll32 and vssadmin, to name a few. But the problem is that every three or so months, malcoders find a new Windows process to commonly abuse. Instead of leaving the user vulnerable, VS assumes that all Windows processes are vulnerable (which was not an easy task) and protects them accordingly. The only question is: how do you handle the prompts?

What do you think is the absolute best way to handle the prompts?
 

WiseVector

From WiseVector
Verified
Top Poster
Developer
Well-known
Dec 14, 2018
643
I tested WiseVector against malware I created myself (an obfuscated PowerShell loader) and WV did great.
“Scan for safe files instead” is nothing new and nothing original/next-gen. These are tricks that “old mice” such as Symantec/Norton, Trend Micro and many others learned a long time ago. These “tricks” are not bad, but you can’t rely solely on them; they have to be combined with other approaches.

What’s your opinion on using a set of clean files as a training set and then using it both for anomaly detection and as a way to reduce FPs?

What techniques does WV currently support to exclude safe files from constant scans and reduce FPs?
Sorry for the late reply.

I think the most important things in machine learning are how deeply you can parse a file, the training set you select, and the features you extract.
Algorithms and ideas are secondary.

Take PE files, for example: there are so many compilers (VB, .NET, Delphi, VC), packers (UPX, VMP, ASPack) and installers (NSIS, SFX, Inno). The ML model’s accuracy depends on how deeply you parse these files. On the other hand, it is fundamentally impossible for machine learning to avoid FPs. Suppose we have two files. One calls URLDownloadToFile to download a file from Microsoft and then executes it. The other downloads malware from a malicious website and executes it. The pseudocode is:

File one:
URLDownloadToFile(hxxps://www.microsoft.com/xx.exe, good.exe)
ShellExecute(good.exe)

File two:
URLDownloadToFile(hxxps://www.xxx.com/xx.exe, good.exe)
ShellExecute(good.exe)

As you can see, the differences between the two files are minor. If you can parse the file deeply enough, the AI will eventually realize that file two poses a bigger threat than file one. But doing so has a bigger performance impact. That's why ML engines often have more FPs than signature-based engines.
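The point can be illustrated with a toy sketch in Python (hypothetical helper names, an invented allowlist, and a made-up malicious domain, not WiseVector's actual logic): a shallow byte-level comparison sees the two downloaders as nearly identical, while a "deeper" feature that parses out the contacted domain separates them.

```python
import re

# Two near-identical downloaders, differing only in the contacted domain.
file_one = 'URLDownloadToFile("https://www.microsoft.com/xx.exe"); ShellExecute("good.exe")'
file_two = 'URLDownloadToFile("https://www.evil-example.com/xx.exe"); ShellExecute("good.exe")'

def shallow_similarity(a, b):
    # Crude positional byte overlap -- roughly what a cheap feature set sees.
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

KNOWN_GOOD_DOMAINS = {"www.microsoft.com"}  # illustrative reputation feed

def deep_feature(code):
    # "Deep parsing": extract the contacted domain and check its reputation.
    domain = re.search(r'https?://([^/"]+)', code).group(1)
    return "low_risk" if domain in KNOWN_GOOD_DOMAINS else "high_risk"
```

The shallow metric rates the two files as substantially similar, yet the domain feature cleanly separates them; extracting such features from real binaries is exactly the costly "deep parsing" the post describes.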

We keep improving our file-parsing ability to reduce FPs. WV is nearly three years old, and during this time we have received a number of FP files from users. These files are great for helping us reduce FPs. If you can parse a file very well and have a good data set, you can do anything you want: for example, identify malware by training on legitimate files, or identify legitimate files by training on malware.

We have come to realize that AI-based static scanning has too many limitations, so we spent a lot of time developing AI-based event analysis and AI-based memory scanning. Eventually, malware has to perform its malicious behavior or decrypt its payload in memory.
 
Last edited:

danb

From VoodooShield
Verified
Top Poster
Developer
Well-known
May 31, 2017
1,742
Static ML/Ai malware detection might seem to be limited and elusive at times, but apparently it is quite valuable at $1.4B ;).

I forgot to mention... a lot of "next gen" providers supplement their static ML/Ai malware detection with file reputation scanning to reduce false positives. It is pretty cool... you can have slightly aggressive algos and then reduce the false positives with file reputation scanning / global whitelisting; it is kind of the best of both worlds, especially when you add behavior blocking and memory scanning post-execution.
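That "best of both worlds" combination can be sketched roughly as follows. The threshold, score scale and reputation labels are all hypothetical, invented for illustration rather than taken from any vendor.

```python
# Sketch: a slightly aggressive static ML verdict is overridden when the
# file's reputation is strong, trading a lower FP rate for the aggressive
# cutoff's higher detection.
ML_BLOCK_THRESHOLD = 0.6  # deliberately aggressive static cutoff

def verdict(ml_score, reputation):
    """reputation: 'trusted' (high prevalence, signed, aged) or 'unknown'."""
    if ml_score >= ML_BLOCK_THRESHOLD and reputation != "trusted":
        return "block"  # suspicious and not vouched for by the whitelist
    return "allow"      # either low ML score or rescued by reputation
```

A borderline detection on a well-reputed file is allowed, while the same score on an unknown file is blocked, which is exactly how reputation absorbs the FPs that an aggressive static model would otherwise generate.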
 
Last edited:

mazskolnieces

Level 3
Well-known
Jul 25, 2020
117
We have discussed this multiple times on multiple threads with your multiple accounts.
I only have a single account.


Where is this proof of best practices? All you have to do is provide a link and you will have proven your point.
There is no burden of proof on me when I know what the industry best practices are. Either you know, based upon real-world professional experience, or you don't. You have to figure it out.

I told you earlier where you can find best practices. I even provided the courtesy of listing a slew of potential sources. There is no single document that you can go to (which is exactly what you are demanding, and if it isn't provided then you'll say no evidence was provided. You're not fooling anyone. Everybody knows what you're like.)

Here are a few examples. A couple are from an industry-wide security policy clearinghouse. Another is from a large security product vendor. The final one from an even larger security product vendor that is quoting the Australian Signals Directorate.

I already know that you're going to come back and argue semantics about what these security practices really say. All I'm going to say is that Microsoft offers multiple mechanisms to disable or even remove many things on Windows. If Microsoft did not intend or want users to disable stuff on Windows, then it would not provide the means to do so. Microsoft's fundamental precept is that if it can be disabled without breaking the OS, then disable it. It even provides lists to its clients stating that things such as PowerShell or Windows Script Host, among others, can be disabled for the best level of security. That has always been the first rule of ASR.

Furthermore, the industry itself has many projects, and ultimately researchers have shown that the best security is provided by disabling processes. Millions of end users adhere to this advice full-time. They've literally been doing it for the past 20+ years without the sky falling.

Arguing that things such as Control Panel are vital OS resources is amateur hour on your part. Control Panel is not a vital resource. Never was. Never will be. And switching a boolean OFF to access Control Panel when rundll32 is disabled is so easy a beginner can do it. Not only that, but depending upon the implementation, there are ways to access Control Panel even with rundll32 disabled, without disabling the protection.

 
Last edited:

mazskolnieces

Level 3
Well-known
Jul 25, 2020
117
Since I am such a HUGE fan of evidence, here is some that demonstrates why permanently disabling these items is a VERY bad idea... please read the entire thread.

You are not providing any compelling evidence here that suggests it "is a VERY bad idea" to block Windows Script Host full-time.

Intel's task.vbs script was blocked. And it was the OP end user that figured it out for themselves after being sent on a wild goose chase by you and others.

This matter was trivial, and it remains trivial. Intel's QueenCreek task.vbs is not required to run; blocking it breaks nothing. So the notion that it "is a VERY bad idea...." to block WSH based upon blocking a non-essential, intrusive .vbs file is ludicrous. Millions block WSH with nothing more than a minor issue like this one. Not to mention that allowing WSH to run exposes abuse of Microsoft's own scripts to do any number of things on either a workstation or a server.

Other default-allow vendors block Intel task.vbs as well. Here are a few examples:

Norton Blocks Task.vbs

Emsisoft Blocks Task.vbs
 
Last edited:
