MalwareTips Cumulative Report - November 2016

tim one

Level 21
Verified
Honorary Member
Top Poster
Malware Hunter
Jul 31, 2014
1,086
That's the advantage of SD. The disadvantage is that you can't reboot with the malware still present. It looks to me like some of our testers reboot and then run a scan, although you would know better than me.
Another disadvantage: sensitive data, if present, is virtualized but still available to be stolen by stealer malware.
It is necessary to encrypt or hide this data.
 

novocaine

Level 5
Verified
Well-known
Aug 19, 2016
200
Good job, well done @BoraMurdar! ;) This is it, I prefer this reference rather than those '100%' labs :p

And about the Malware Hub, I hope it's expanded to other protection areas like phishing/malicious site protection, etc., not just malware :D

Someday AV vendors should put an MT banner with their percentage on their websites, and MT would be 'your security advisor', as it is

Almost forgot to salute @Der.Reisende , @harlan4096 , @Daniel Hidalgo , @silversurfer , @Av Gurus , @omidomi , @Lord Ami , @Brodyaga , @Solarquest , @koko for your time and hard work, you guys are awesome!
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,040
The procedure can be improved, but I think it would be much more work for the reporter.

1. The final report should cover one month of tests, and would be published after two months.
2. Every tester should explicitly mention which files could bypass his/her security setup.
3. All files from point 2 should be compared with VirusTotal detections (at least one month old).
4. The analysis of VirusTotal detections should be based on a pre-established list of good/informative antivirus engines (Kaspersky, Eset, Emsisoft, Bitdefender, ...).
5. From this analysis, the reporter can establish, for each tested product, the number of false positives and infections.
6. The numbers from point 5 will only be relative, because some false positives may be blocked by all of the tested products.

The above procedure should largely avoid the 'virtual machine and false positives' problems.
I can help with the analysis, if help is needed. :)
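Points 3-5 above could be sketched as a small script. This is only an illustration of the counting idea, not an actual MalwareTips tool; the engine names, data shapes, and threshold are all assumptions for the example.

```python
# Sketch of points 3-5: classify each file that bypassed a product as a likely
# infection or a likely false positive, based on how many "trusted"
# VirusTotal engines flag it. All names and data here are hypothetical.

TRUSTED_ENGINES = {"Kaspersky", "ESET-NOD32", "Emsisoft", "BitDefender"}

def classify(vt_detections, threshold=2):
    """vt_detections: set of engine names that flagged the file on VirusTotal."""
    hits = len(vt_detections & TRUSTED_ENGINES)
    return "infection" if hits >= threshold else "false positive"

def summarize(bypassed):
    """bypassed: {product: {file_hash: set(engines)}} -> per-product counts."""
    summary = {}
    for product, files in bypassed.items():
        counts = {"infection": 0, "false positive": 0}
        for engines in files.values():
            counts[classify(engines)] += 1
        summary[product] = counts
    return summary

# Example with made-up hashes and detections:
bypassed = {
    "Product A": {
        "aaa111": {"Kaspersky", "ESET-NOD32", "Emsisoft"},  # widely detected
        "bbb222": {"SomeOtherAV"},                          # only one engine
    },
}
print(summarize(bypassed))
# {'Product A': {'infection': 1, 'false positive': 1}}
```

As point 6 notes, these counts stay relative: a file every product blocks never shows up in any product's bypass list, so it never gets classified at all.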
 

Andy Ful

From Hard_Configurator Tools
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,040
It would also be interesting to test one antivirus in parallel, in a virtual machine and in Shadow Defender, to see how the environment influences the test results.
The results of the advanced procedure from the previous post could then be compared with the standard monthly procedure to see whether the differences are meaningful.
 

Venhiem

Level 3
Verified
Mar 10, 2014
140
Sorry for the noob question - what is the difference between static and dynamic detections?

And thank you very much for this, very informative!

Edit: Okay scratch that, reading around got me the answer :D

I have another question though, why wasn't Emsisoft tested?
 

TheMalwareMaster

Level 21
Verified
Honorary Member
Top Poster
Well-known
Jan 4, 2016
1,022
Sorry for the noob question - what is the difference between static and dynamic detections?

And thank you very much for this, very informative!

Edit: Okay scratch that, reading around got me the answer :D

I have another question though, why wasn't Emsisoft tested?
Great job guys :) I'm sorry I no longer have time to test with you :( . Basically, static means a context menu scan with the product's real-time protection disabled; it basically includes only local signatures. After the static test, the product's real-time protection is enabled again and the samples are run. That usually tests the cloud + HIPS or behavioural blocker
 

cruelsister

Level 42
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Apr 13, 2013
3,133
What an outstanding idea! But regarding VM-aware malware: concentrating on these may really be over and above what can be done in such tests. A disclaimer in the dynamic portion that such malware is not considered would save much grief, and as it makes up a relatively minor portion of all malware, it shouldn't skew the results.
 

OokamiCreed

Level 18
Verified
Honorary Member
Top Poster
Well-known
May 8, 2015
881
Far more accurate than the AV test lab websites. Lies and misinformation are just as bad as malware, and I feel AV test labs have quite a few inaccuracies. Companies can be swayed too easily, and I don't really know what they consider malware/adware, so I mostly ignore them and look to my own results (haven't done any in a long while) and to those who show details instead of only statistics (like our video testers/AV testers in the Malware Hub and so on).

It's a must to be as honest as you can be, without bias, in this field, and MT is a place to do that, for obvious reasons. I never thought about alternative ways to show the results of several vendors apart from looking at multiple tests/videos, instead of viewing results from companies staffed by people you do not know and can't even speak to informally. Even without a super-detailed look into the tests conducted here, the AV testers have proven themselves to be unbiased, intelligent, and transparent, so the results are taken far more seriously.

Really looking forward to more test results and how they will evolve. On a side note, I love how there is a PDF available on cumulative and individual reports. Layout is clean and professional looking.
 

cruelsister

Level 42
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Apr 13, 2013
3,133
Regarding SD and the anti-VM aspects of some malware, this would also have the same issue. Remember that Shadow Mode is also a virtualized environment and can be detected; the possibility of detection depends on which environmental aspects the malware queries.

Consistency will be the overriding issue, so as long as each AV is run within the same virtual environment, the results will be valid with respect to the malware used.
 

tim one

Level 21
Verified
Honorary Member
Top Poster
Malware Hunter
Jul 31, 2014
1,086
Regarding SD and the anti-VM aspects of some malware, this would also have the same issue. Remember that Shadow Mode is also a virtualized environment and can be detected; the possibility of detection depends on which environmental aspects the malware queries.
Right, but statistically, most malware with anti-virtualization methods targets virtual machines, in my experience.
 

silversurfer

Level 85
Verified
Honorary Member
Top Poster
Content Creator
Malware Hunter
Well-known
Aug 17, 2014
10,048
The testing procedure can be improved for sure, but it is hardly possible for the testers because it is so time-consuming. Anyone here can try testing a product of their choice for just one month; after that, you may have a different opinion about more accurate results.
 

novocaine

Level 5
Verified
Well-known
Aug 19, 2016
200
In addition, there is a difference in the second-opinion scanners used; I think this made a difference in the results. I agree with @Andy Ful that the results must be compared with VT, so having something like Process Explorer - Sysinternals as a second opinion is a must. In previous tests on the Malware Hub, PE revealed infections many times when other second-opinion tools like ZAM, HP, and MBAM failed.
 

DardiM

Level 26
Verified
Honorary Member
Top Poster
Malware Hunter
Well-known
May 14, 2016
1,597
Great job guys :) I'm sorry I no longer have time to test with you :( . Basically, static means a context menu scan with the product's real-time protection disabled; it basically includes only local signatures. After the static test, the product's real-time protection is enabled again and the samples are run. That usually tests the cloud + HIPS or behavioural blocker
Also static heuristics (a big reason for the obfuscation of some samples), signatures in the cloud, etc., as long as the sample is not run :)
 

Wave

static means context menu scan, with the product's realtime protection disabled
Static detection is any detection which occurs when the sample is not executing in memory. This can include general signature-based checksum scanning (e.g. MD5/SHA-1/SHA-256 hash detection), byte/hex pattern detection, and even a scoring system: scanning the Import Address Table for the imported libraries and functions and increasing the score for suspicious imports or a lot of NTAPI imports, checking the PE file header for suspicious content (e.g. comparing the virtual size with the size on disk to help identify packing; the imports can also sometimes be used to identify packing), checking the strings within the PE, checking for a digital signature, checking registration details (e.g. company name, copyright, etc.), and so on.
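The scoring-system idea can be sketched in a few lines. This is a toy illustration only; the feature names, weights, and threshold are all made up for the example, and a real engine would extract these features by parsing the PE file itself.

```python
# Toy static heuristic scorer: hypothetical features extracted from a PE file
# feed a weighted score, and the file is flagged if the score crosses a
# threshold. Feature names, weights, and the threshold are all assumptions.

SUSPICIOUS_IMPORTS = {"NtWriteVirtualMemory", "CreateRemoteThread", "VirtualAllocEx"}

def static_score(features):
    score = 0
    # injection-related / NTAPI imports raise suspicion
    score += 2 * len(set(features["imports"]) & SUSPICIOUS_IMPORTS)
    # a large gap between virtual size and size on disk hints at packing
    if features["virtual_size"] > 3 * features["size_on_disk"]:
        score += 3
    # unsigned binaries with no company info are more suspect
    if not features["signed"]:
        score += 1
    if not features["company_name"]:
        score += 1
    return score

sample = {
    "imports": ["CreateRemoteThread", "VirtualAllocEx", "CreateFileW"],
    "virtual_size": 400_000,
    "size_on_disk": 90_000,
    "signed": False,
    "company_name": "",
}
print(static_score(sample), static_score(sample) >= 5)  # 9 True
```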

Dynamic detection is any detection which occurs while the sample is executing in memory. This can include dynamic heuristics (e.g. analysing the sample's behavior for a short period at the start of execution to identify patterns commonly shown by malicious software, such as adding itself to start-up or quickly dropping a file to the temp folder) and the behavior blocker/host intrusion prevention system (this may not necessarily count as a full "detection", because it only requests a response from the user based on the sample's behavior rather than auto-blocking the behavior and quarantining the sample as a confirmed threat, but it is still useful to test its effectiveness and see how it performs).
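The pattern-matching part of a dynamic heuristic can be sketched as follows. This is a minimal illustration under stated assumptions: the event names and patterns are hypothetical, and a real behavior blocker monitors events via kernel/API hooks rather than a ready-made trace list.

```python
# Minimal sketch of dynamic-heuristic pattern matching: a short trace of
# runtime events (hypothetical event names) is checked for combinations
# commonly seen in malware, e.g. a temp-folder drop followed by autostart.

SUSPICIOUS_PATTERNS = [
    {"drop_file_to_temp", "add_run_key"},           # temp drop + autostart
    {"disable_task_manager", "modify_hosts_file"},  # tampering combo
]

def dynamic_verdict(events):
    seen = set(events)
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern <= seen:  # all events of the pattern were observed
            return "block"
    return "allow"

trace = ["create_process", "drop_file_to_temp", "add_run_key"]
print(dynamic_verdict(trace))  # block
```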

Therefore, detection while real-time protection is active for the file system still counts as static detection; it only becomes dynamic when the sample is detected due to its execution (e.g. behavioral patterns; unless the sample unpacks itself, gets re-scanned by the memory scanner, and is then detected with static techniques, in which case it is still counted as a dynamic detection because the sample is executing). It shouldn't matter whether the scan is done via a context menu scan or by just copying the samples to another folder; if the real-time protection is working for write activity, it should detect the same as the normal scanner would (statically).
------------------------------------------------------

in addition, there is difference on 2nd opinion used, I think this made difference on the result and I agree with @Andy Ful the result must be compared with VT, so having like Process Explorer - Sysinternals as 2nd opinion is a must, from the previous tests on malware hub, PE revealed infection many times when the other 2nd opinion like ZAM, HP, MBAM failed,
I do not think VirusTotal should be used for this, because VirusTotal is not completely reliable and accurate; the engines submitted to VirusTotal by the vendors will not always behave the same as the engines implemented in the Home/Business products they provide (e.g. they may be more or less aggressive, causing more or fewer false-positive detections, etc.), and VirusTotal themselves have said this in the past:

At VirusTotal we are tired of repeating that the service was not designed as a tool to perform antivirus comparative analyses, but as a tool that checks suspicious samples with several antivirus solutions and helps antivirus labs by forwarding them the malware they fail to detect. Those who use VirusTotal to perform antivirus comparative analyses should know that they are making many implicit errors in their methodology, the most obvious being:
  • VirusTotal's antivirus engines are commandline versions, so depending on the product, they will not behave exactly the same as the desktop versions: for instance, desktop solutions may use techniques based on behavioural analysis and count with personal firewalls that may decrease entry points and mitigate propagation, etc.
  • In VirusTotal desktop-oriented solutions coexist with perimeter-oriented solutions; heuristics in this latter group may be more aggressive and paranoid, since the impact of false positives is less visible in the perimeter. It is simply not fair to compare both groups.
  • Some of the solutions included in VirusTotal are parametrized (in coherence with the developer company's desire) with a different heuristic/aggressiveness level than the official end-user default configuration.
These are just three examples illustrating why using VirusTotal for antivirus testing is a bad idea; you can read more about VirusTotal and antivirus comparatives in our blog.

Source: FAQ - VirusTotal
------------------------------------------------------

Therefore, VirusTotal shouldn't be relied on for these tests at all; otherwise the results wouldn't be as reliable as they could be. The best thing the malware testers can do is either perform manual analysis in a virtual machine (and even there, they can identify attempts by the sample to detect a virtual environment; even checking the strings output may give a lead to such activity, and if not, the disassembly will), or run the sample through a sandbox like Cuckoo on Linux (or use an automated online analysis service like reverse.it or malwr.com) and review the submission results. Either way, the analysis results, manual or automated, help them understand how the sample works, which will be beneficial for the dynamic testing of the AV product.

That being said, if the malware testers know how a sample works, then they can assign it a real threat name themselves without having to rely on AV companies (submitting it before testing would be pointless, as it would affect the detection results), which means the statistics can be improved... They can then categorize the detection results to show whether the tested AV product showed signs of being better at detecting specific threat types/variants than others; e.g. AVG may detect bootkits better than it detects a keylogger, you never know.

As well as this, if you know the real threat type and what behavior the sample will attempt to execute on the system, then you can dynamically test the AV product better too... For example, you can test the BB/HIPS features correctly. If there is no BB/HIPS alert, you'll know why, and if the sample detects the virtual environment, then you'll know that it wasn't the BB/HIPS failing to work, but the sample refusing to execute its malicious activity.
------------------------------------------------------

Anyway, I don't know about you guys, but I really love the idea of this and I'm going to follow it... Much better than AV-C in my opinion, and I don't really trust the other testing companies anyway - although I do trust the results here on MalwareTips, because I know the members here wouldn't cheat them or accept a bribe! :rolleyes:

Great work :)

Hope this helped and provided some insight? :)
 
