The peculiarity of EXE malware testing.
<blockquote data-quote="Andy Ful" data-source="post: 998197" data-attributes="member: 32260"><p>Interesting article about the delivery of ransomware:</p><p>[URL unfurl="true"]https://www.infosecurity-magazine.com/news/87-ransomware-brands-exploit-macros/[/URL]</p><p></p><p>From this article, it follows that about 87% of ransomware was delivered via weaponized documents with macros.</p><p>This is a striking example that the results of tests with only EXE files are unreliable in the context of real-life protection against threats. In such tests, we are skipping the possible protection against non-EXE samples, that can prevent the delivery/execution of EXE payloads.</p><p></p><p>When we have two AVs (AV1 and AV2) which scored 100% and 90% in the EXE test, such a result does not mean that the AV1 protected the users better than the second, <strong><span style="color: rgb(184, 49, 47)">and we even cannot exclude that the AV2 protected users in the wild better than AV1.</span></strong></p><p><strong><span style="color: rgb(184, 49, 47)">The EXE tests could be reliable only under the assumption that tested AVs have got the same protection against non-EXE samples (which is obviously untrue).</span></strong></p><p><strong></strong></p><p><strong>It is very probable that despite very different scorings in the EXE tests, the real-life protection of most AVs is very similar.</strong></p><p><strong></strong></p><p><strong>Edit.</strong></p><p>The 100% protection scoring in the test with a few days old EXE samples usually means that some samples could compromise the AV1 protection in the wild as 0-day EXE malware and after that, the AV1 <span style="color: rgb(41, 105, 176)"><strong>has already got signatures for them in the test</strong></span>. So we can have (for example) 90% protection in the wild and 100% protection in the test.</p><p></p><p>The 90% protection scoring in the test with a few days old EXE samples is sometimes possible even when the AV2 blocked the attacks in the wild, but these attacks were stopped before the EXE samples could be delivered to the machine. A good example is when the ransomware attack starts with a weaponized document and AV2 blocks the macro. The never-seen-before EXE sample was not delivered, so <span style="color: rgb(41, 105, 176)"><strong>there is no signature for it in the AV2 signature base at the test time</strong></span>. Finally, we can have (for example) 100% protection in the wild and 90% protection in the EXE test.</p></blockquote><p></p>
[QUOTE="Andy Ful, post: 998197, member: 32260"] Interesting article about the delivery of ransomware: [URL unfurl="true"]https://www.infosecurity-magazine.com/news/87-ransomware-brands-exploit-macros/[/URL] From this article, it follows that about 87% of ransomware was delivered via weaponized documents with macros. This is a striking example that the results of tests with only EXE files are unreliable in the context of real-life protection against threats. In such tests, we are skipping the possible protection against non-EXE samples, that can prevent the delivery/execution of EXE payloads. When we have two AVs (AV1 and AV2) which scored 100% and 90% in the EXE test, such a result does not mean that the AV1 protected the users better than the second, [B][COLOR=rgb(184, 49, 47)]and we even cannot exclude that the AV2 protected users in the wild better than AV1. The EXE tests could be reliable only under the assumption that tested AVs have got the same protection against non-EXE samples (which is obviously untrue).[/COLOR] It is very probable that despite very different scorings in the EXE tests, the real-life protection of most AVs is very similar. Edit.[/B] The 100% protection scoring in the test with a few days old EXE samples usually means that some samples could compromise the AV1 protection in the wild as 0-day EXE malware and after that, the AV1 [COLOR=rgb(41, 105, 176)][B]has already got signatures for them in the test[/B][/COLOR]. So we can have (for example) 90% protection in the wild and 100% protection in the test. The 90% protection scoring in the test with a few days old EXE samples is sometimes possible even when the AV2 blocked the attacks in the wild, but these attacks were stopped before the EXE samples could be delivered to the machine. A good example is when the ransomware attack starts with a weaponized document and AV2 blocks the macro. The never-seen-before EXE sample was not delivered, so [COLOR=rgb(41, 105, 176)][B]there is no signature for it in the AV2 signature base at the test time[/B][/COLOR]. Finally, we can have (for example) 100% protection in the wild and 90% protection in the EXE test. [/QUOTE]