Suggestions for Malware Vault testers
[QUOTE="Parsh, post: 660750, member: 58090"] That's what I've been trying to convey. We cannot be sure just by observing that the product was able to perform 100% as it appeared to be, against say non-conventional attacks. Something could be missed, some bug/fault can occur. I'm not saying this arbitrarily by guessing. In tech, unexpected things can happen. Proven technology modules can fail in rare cases in what they intend to do. True. Not disagreeable. However as I said earlier, you cannot always be 100% sure in [I]all the cases [/I]that the malware was [I]blocked before execution[/I]. In case of anti-exes, maybe 100% might be possible unless a non-apparent fault occurs in protection due to instability by system/malware or an unforeseen bug in the AV. Again, I'm only talking about chances. I wonder how many people do actually think of such cases as [I]really clean (non-infected) systems. [/I]Hey, you missed NPE that might increase the chances of finding a [I]few[/I] more nasties[I][B] ;). [/B][/I]Probably the use of RegShot/FolderChangeView can help more[I][B].[/B][/I] Hopefully some or many of us here are aware that sophisticated attacks cannot be simulated at testing hubs and that "no detections" does not always imply "no infection always". On paper, we can agree with this in cases of VDS, Comodo, Avast Hardened Mode. But does Avira cloud just entirely block the [I]executable sample [/I]from running OR does it happen that the sample might run (could have been found safe) but the sub-processes or other executables that the main sample process downloads are blocked? If the original sample itself is blocked from running (to be scanned by cloud to find malicious content) and quarantined, we might consider it as blocked from memory (if it is exactly so). But I earlier gave an example where a tester can perhaps misinterpret which process was blocked. The tester might see that the main sample was blocked (hence samples didn't touch memory at all), however actually the main sample process ran but a newly forked process (with the same name) was blocked from running. In the above case, the main sample could run but the tester couldn't grasp it. It seemed that the main sample was blocked. The runtime hierarchies of processes etc. need to be observed well and not everything is visible in all cases as [USER=59377]@Winter Soldier[/USER] said. Who said we absolutely need scanning in this case? The way we collect and share samples for downloading by testers, we can fairly consider '100% static detection of samples' = 'clean'. However this is more of a convention followed for uniformity at MH & for the audience to see the state of the tester's system. Though not absolutely necessary. [/QUOTE]