Honestly, the things I said in my post can take a long time... The members have a personal life and jobs too, so I think it would be too time consuming. So it might not even be worth the trouble really. But how it's being done now is fine anyway, it doesn't need to be top grade A
Static detection is any detection that occurs while the sample is not executing in memory. This covers the classic signature-based checksum scanning (e.g. MD5/SHA-1/SHA-256 hash detection), byte/hex pattern detection, and even scoring systems: scanning the Import Address Table and raising the score for suspicious imports or an unusually large number of NTAPI imports; checking the PE header for suspicious content (e.g. comparing a section's virtual size with its size on disk can help identify packing, and the imports can sometimes be used to identify packing too); checking the strings within the PE; checking for a digital signature; checking the registration details (company name, copyright, etc.); and so on.
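To make the idea concrete, here is a minimal sketch of that kind of static scoring. The suspicious-import list, weights, and thresholds are illustrative assumptions for this sketch, not any vendor's actual rules:

```python
import hashlib

# Imports often weighted as suspicious -- an assumption for this
# sketch, not a real engine's rule set.
SUSPICIOUS_IMPORTS = {
    "NtWriteVirtualMemory": 3,
    "CreateRemoteThread": 3,
    "VirtualAllocEx": 2,
    "SetWindowsHookExA": 2,
    "URLDownloadToFileA": 2,
}

KNOWN_BAD_SHA256 = set()  # hashes of previously confirmed samples would go here

def static_score(file_bytes, imports, virtual_size, size_on_disk):
    """Crude static score: hash lookup, import weighting, packing check."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in KNOWN_BAD_SHA256:
        return 100  # exact signature match, no heuristics needed

    score = sum(SUSPICIOUS_IMPORTS.get(name, 0) for name in imports)

    # Very few imports plus a large virtual/raw size gap often hints at packing.
    if len(imports) < 5:
        score += 2
    if size_on_disk and virtual_size / size_on_disk > 4:
        score += 3
    return score

print(static_score(b"MZ", ["VirtualAllocEx", "CreateRemoteThread"], 40960, 4096))  # 10
```

A real engine would of course parse the IAT and section sizes out of the PE file itself rather than take them as arguments; this only shows the scoring logic.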
Dynamic detection is any detection that occurs while the sample is executing in memory. This covers dynamic heuristics (e.g. analysing the sample's behavior for a short period at the start of execution to identify patterns common to malicious software, such as adding itself to start-up or quickly dropping a file to the temp folder) and the behavior blocker/host intrusion prevention system (BB/HIPS). A BB/HIPS prompt may not count as a full "detection", since it only requests a response from the user based on the sample's behavior rather than auto-blocking the behavior and quarantining the sample as a confirmed threat - but it is still useful to test its effectiveness and see how it performs.
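The dynamic side can be sketched the same way: score observed behaviour events and prompt the user past a threshold. The event names, weights, and threshold are hypothetical, purely to illustrate the pattern:

```python
# Hypothetical behaviour events a dynamic heuristic might watch at start-up;
# the names and weights are assumptions for illustration only.
BEHAVIOUR_WEIGHTS = {
    "write_run_key": 4,        # adds itself to a start-up registry key
    "drop_to_temp": 3,         # drops a file into the temp folder quickly
    "open_remote_process": 4,  # opens a handle into another process
    "http_post": 2,            # phones home early in execution
}

ALERT_THRESHOLD = 6  # above this, a behaviour blocker might prompt the user

def dynamic_verdict(observed_events):
    """Return ('alert'|'allow', score) for a list of observed events."""
    score = sum(BEHAVIOUR_WEIGHTS.get(event, 0) for event in observed_events)
    return ("alert" if score >= ALERT_THRESHOLD else "allow"), score

print(dynamic_verdict(["drop_to_temp", "write_run_key"]))  # ('alert', 7)
```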
Therefore, detection while real-time file-system protection is active still counts as static detection; it only becomes dynamic when the sample is detected because of its execution (e.g. behavioral patterns). If the sample unpacks itself and is then re-scanned by the memory scanner, it is technically caught with static techniques, but since the sample is executing it still counts as a dynamic detection. It shouldn't matter whether the scan happens through the context menu or when you simply copy the samples to another folder; if real-time protection monitors write activity, it should detect the same things the on-demand scanner would (statically).
I do not think VirusTotal should be used for this, because VirusTotal is not completely reliable and accurate. The engines the vendors submit to VirusTotal do not always behave the same as the engines implemented in their Home/Business products (e.g. they may be more or less aggressive, causing more or fewer false positives), and VirusTotal themselves have said this in the past:
Therefore, VirusTotal shouldn't be relied on for these tests at all, or the results won't be as reliable as they could be. The best thing the malware testers can do is either perform manual analysis in a Virtual Machine (even there, they can spot the sample's attempts to detect a virtual environment - checking the strings output may give a lead to such activity, and if not, the disassembly will), or run the sample through a sandbox like Cuckoo on Linux (or an automated online analysis service like reverse.it or malwr.com) and review the submission results. Either way, the analysis results help them understand how the sample works, which is beneficial for the dynamic testing of the AV product.
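The point about the strings output giving a lead can be shown in a few lines: extract printable ASCII runs (like the `strings` utility does) and look for well-known virtualisation artifacts. The marker list is an illustrative assumption, not exhaustive:

```python
import re

# Substrings whose presence in a binary's strings output can hint at
# anti-VM checks; this marker list is an illustrative assumption.
VM_MARKERS = ("VBOX", "VMWARE", "QEMU", "VIRTUALBOX", "SBIEDLL")

def ascii_strings(data, min_len=4):
    """Rough equivalent of the `strings` utility for ASCII text."""
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

def vm_detection_leads(data):
    """Return the VM-related markers found in the binary's strings."""
    hits = []
    for s in ascii_strings(data):
        upper = s.upper()
        hits.extend(m for m in VM_MARKERS if m.encode() in upper)
    return sorted(set(hits))

sample = b"\x00\x01MZ\x90\x00VBoxGuest.sys\x00GetTickCount\x00vmware_tray"
print(vm_detection_leads(sample))  # ['VBOX', 'VMWARE']
```

Finding such markers doesn't prove the sample is VM-aware, but it tells the tester where to look in the disassembly.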
That being said, if the malware testers know how a sample works, they can assign it a real threat name themselves without relying on the AV companies (submitting it before testing would be pointless anyway, as it would affect the detection results), which means the statistics can be improved... They can then categorize the detection results to show whether the tested AV product showed signs of detecting specific threat types/variants better than others; e.g. AVG may detect bootkits better than it detects a keylogger, you never know.
As well as this, if you know the real threat type and what behavior the sample will attempt on the system, you can test the AV product dynamically much better too - for example, test the BB/HIPS features properly. If there is no BB/HIPS alert, you'll know why; and if the sample detects the virtual environment, you'll know it wasn't the BB/HIPS failing, but the sample refusing to execute its malicious activity.
Anyway, I don't know about you guys but I really love the idea of this and I'm going to follow this... Much better than AV-C in my opinion, and I don't really trust the other testing companies anyway - although I do trust the results here on MalwareTips because I know the members here wouldn't cheat them or accept a bribe!
It is all true. The best way would be manual analysis, but that can be time-consuming, so I proposed adopting VT. After a one-month delay, most tested files are correctly recognized by some good/informative antivirus programs as, for example: clean file, adware, risk-tool, not-a-virus, etc.
I agree that in some cases VT will not give a definitive answer, and then manual analysis will help. We just have to find someone who can do it - and who is willing to.
The methodology should be the same (or at least very similar) between all testers.
To make this report clear, it must be mentioned that it is a dynamic test, not limited to a specific moment in time.
A statement should be made that WD was tested alone, as a single scanner (which is obviously how it was tested), without the other native Windows security features (UAC and SmartScreen) to back it up. So its results will naturally be worse than those of other products.