Advice Request Emsisoft static detection - is it getting worse?

Please provide comments and solutions that are helpful to the author of this topic.

Status: Not open for further replies.

Fabian Wosar

From Emsisoft
Developer
They (Emsi devs) should separate the frustration from the reality: they might not be satisfied with the results in the HUB but that is the reality and Emsisoft has certain results. ;)
Actually, the results don't really bother me. Just people trying to tell us that we have or not have to implement certain things and using those results as some kind of argument are annoying. Here is the problem: None of you ever looks at why malware behaved the way it did. Instead you conjure up a miss. Oh wscript.exe threw an error, because it is a downloader and the download URL is down or was blocked? Clearly that's a miss! Oh, the ransomware detects that a certain product is installed (Cerber for example) and doesn't do anything that could be detected? Clearly that's a miss! The unpacking loop of the obfuscator takes a minute and the tester got impatient? Clearly a miss right there! C2 server down, so nothing happens? Miss miss miss! It's ridiculous. If you ever end up in a situation during a test where you feel tempted to either put a question mark behind your verdict, because you don't really know, it should be a huge indicator that either you aren't fit to do the test or that the methodology you work with is way too vague and could probably benefit from some clarifying addendums.

I don't mind being tested, to be honest. The more the merrier, really, and data is always interesting. I even like YouTube reviews. They are a dirty pleasure of mine, and you can rest assured that when someone does a test of a product I am interested in, one of the first clicks on that video is from me (<3 Google Alerts). But, with the exception of one or two, the YouTube testers at least know that what they do isn't really professional and is pretty much insignificant. I mean, even if we missed every single sample every single day, all it would show is that we miss about 5 samples out of the 300,000 - 500,000 we receive every day. Not more, not less.
 

Ana_Filiz

Fabian Wosar said:
Actually, the results don't really bother me. [...] Not more, not less.

Thank you for your answer, Mr. Wosar. I can assure you that Emsisoft support is state of the art, and that has been tested and confirmed by many users and customers. Hats off for this. Regarding the testing and the Hub, I appreciate the Hub team's support for us, the users. As for the devs, some appreciate it and some don't; that is up to you. :cool:
 

Lucent Warrior

Fabian Wosar said:
Actually, the results don't really bother me. Just people trying to tell us that we have or not have to implement certain things and using those results as some kind of argument are annoying.
This is understandable.

Fabian Wosar said:
Here is the problem: None of you ever looks at why malware behaved the way it did. Instead you conjure up a miss. [...] it should be a huge indicator that either you aren't fit to do the test or that the methodology you work with is way too vague and could probably benefit from some clarifying addendums.
This, however, lumps all users into one category and condemns everyone here, which is something I have a problem with. Have I seen users get impatient and not let the malware complete its sequence, or fail to provide enough information about what they observed? Well, yes, but not all the time, and definitely not all testers/users here.

Fabian Wosar said:
I don't mind being tested, to be honest. The more the merrier, really, and data is always interesting. I even like YouTube reviews. [...] Not more, not less.
This is simply ridiculous to admit. As someone who can point out flaws quickly, you should be well aware that almost every one of the YouTube testers is using VirusSign samples: old, with very high detection, averaging 45/57 on VT before being tested. That does make the PRODUCT shine, but it does not accurately show how the product will stand against low-detection, prevalent in-the-wild malware, the importance of which you of all people in this conversation should understand. The fact that you would back these testers "because your product looks good" and not the ones testing fresher samples, and submitting them, says much about you.
 

Fabian Wosar

I think we must have different definitions of what a "dirty/guilty pleasure" is. My understanding, at least, is that it is something generally accepted to be complete and utter trash that nobody should indulge in, but that you indulge in anyway and feel a bit ashamed, dirty or guilty about because of it. So how you can conjure up any kind of endorsement from that is just as baffling to me as the "Missed (?)" verdicts I find in almost all Malware Hub "tests". To spell it out for you: they are at least as bad as the Malware Hub tests, probably even worse. Drawing information from such a small sample set tells you nothing about the overall protection. You would need at least a statistically significant number of samples to be able to draw any conclusions.
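To put rough numbers on that last point (the sample counts below are invented purely for illustration): a 95% confidence interval around a 10-sample result is so wide that it is compatible with almost any underlying detection rate, while the same observed rate over thousands of samples pins it down. A minimal Python sketch:

```python
# Illustrative sketch with made-up sample counts: how much a detection rate
# measured on 10 samples can actually tell you, versus 5,000 samples.
from math import sqrt

def wilson_interval(detected, total, z=1.96):
    """Approximate 95% Wilson score interval for an observed detection rate."""
    p = detected / total
    denom = 1 + z * z / total
    center = (p + z * z / (2 * total)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return center - half, center + half

for detected, total in [(9, 10), (4500, 5000)]:
    low, high = wilson_interval(detected, total)
    print(f"{detected}/{total} detected -> true rate plausibly {low:.1%} to {high:.1%}")

# 9/10 detected is compatible with anything from roughly 60% to 98%,
# which is why a handful of samples says next to nothing about protection.
```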
 

Lucent Warrior

Fabian Wosar said:
I think we must have different definitions of what a "dirty/guilty pleasure" is. [...] You would need at least a statistically significant number of samples to be able to draw any conclusions.
That is where you and I differ. I know the sample set is small, but it is more prevalent, and it only takes one missed sample to infect the system, so large sample counts do not impress me when they constitute 2% of the samples in the wild.

Stating that you are first to click the like button on those videos encourages them to keep making them incorrectly; a pleasure, whether dirty or not, is still encouragement.

Whether a sample is missed, or a verdict is not present, can also depend on many variables, as you know quite well: anything from the sample being containment-aware to inconsistencies from testing in a contained environment. The tests are a baseline, and full analysis is not done on the samples; you are correct. It still does not take away from the fact that the sample was not detected by signature, and submitting it helps improve your signature database, for which I have yet to see a thank you to the volunteers helping your product.

This is the last I will comment in this thread; you are doing a good enough job proving some points all on your own.
 

Malakke

Thread author
Fabian Wosar said:
Actually, the results don't really bother me. [...] Not more, not less.

I'm very interested to know what the right methodology is. I appreciate Emsi a lot, but you should be ready to accept criticism, especially when it is not favorable at all. This is a site where Emsisoft has always been well regarded (in fact, I learned about your software on MT), so, sincerely, I was a bit surprised by your reaction. If you consider the Malware Vault rubbish, all suggestions to improve the methodology are welcome. Thanks

Regards
 

Fabian Wosar

Lucent Warrior said:
Stating that you are first to click the like button on those videos encourages them to keep making them incorrectly; a pleasure, whether dirty or not, is still encouragement.
You have some significant reading comprehension and temper issues it seems. Calm down, take your blood pressure meds or grab a paper bag to breathe into. I had a stroke incident in my family recently. It's not pretty.
I never said that I click like on those videos. I said I click on those videos, because that is what you have to do in order to watch them.

Lucent Warrior said:
Whether a sample is missed, or a verdict is not present, can also depend on many variables, as you know quite well. [...] It still does not take away from the fact that the sample was not detected by signature, and submitting it helps improve your signature database, for which I have yet to see a thank you to the volunteers helping your product.
I thought signatures were all crap anyway and the benefit of the new and improved Malware Hub was that it looks at the whole picture and not just signatures. Now you are telling me it is fine that the behaviour portion of the tests is lackluster, because at least the signature portion is done right and already provides a baseline value. Make up your mind :)
 

Malakke

Fabian Wosar said:
You have some significant reading comprehension and temper issues it seems. [...] I never said that I click like on those videos. I said I click on those videos, because that is what you have to do in order to watch them. [...] Make up your mind :)

In my country we say: "Tirar piedras contra tu propio tejado" (throwing stones at your own roof). No need to be so rude, Fabian.
 

Lucent Warrior

Fabian Wosar said:
You have some significant reading comprehension and temper issues it seems. [...] I never said that I click like on those videos. I said I click on those videos, because that is what you have to do in order to watch them. [...] Make up your mind :)
Oh good, he can twist words; I'm impressed. :rolleyes:

I'm sorry I was not accurate about you clicking on the videos and not the like button, because that is really what is important here. o_O :rolleyes:

Where did I state they were lackluster and that I was fine with it, or was this wishful thinking on your part? I stated it is a baseline, as the samples are not fully analyzed in the tests and variables can and do affect them, although seeing the sample drop something in the file system, or establish a connection, is enough to understand that the product allowed it to do so when it missed; hence, a baseline. Unlike you, I'm not making a dime doing this, and neither are the volunteers; how about showing a little respect at some point?

And here you forced me to respond again, even though I stated I would not, but I'm sure you will find a dig in this as well. I can state right now, clearly, that I will never recommend your products again, not because I do not think they are good, but because of you and how you carry yourself.

I'm quite tired of this cutthroat business, where many vendors are just as bad as the cybercriminals nowadays; this is my problem.
 

Sengieffe

Lucent Warrior said:
That is where you and I differ. I know the sample set is small, but it is more prevalent, and it only takes one missed sample to infect the system, so large sample counts do not impress me when they constitute 2% of the samples in the wild.

There are 300,000+ new samples released every day; the majority of these do not infect any users. No antivirus has a 100% rate on all of these 300,000 or so samples, so "it only takes 1 sample to infect the system when missed" applies to the whole industry. You cannot say X AV has a better detection rate, or make a decision on the quality of protection, with such a small sample set. You might have picked 10 samples that certain AVs miss but others detect, whereas with another random 10, the AVs that did not detect the first set detect those, and vice versa.

Malakke said:
I'm very interested to know what the right methodology is. [...] If you consider the Malware Vault rubbish, all suggestions to improve the methodology are welcome.
If you really want to test malware properly, you need to use tools like Regshot and a network sniffer to figure out what changes are made to the system when an individual sample is run. Record those, then reset the VM. Repeat for each sample. Then enable the AV you are testing and run the individual sample with Regshot and the network sniffer running. Compare the changes made. If the sample made no changes on the first run, then you cannot necessarily consider it a miss if the AV doesn't catch it, as the sample may be VM-aware. You have to know what changes to look for, however.
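As a minimal sketch of that "snapshot, detonate, diff" idea, and only that idea: the snippet below diffs a directory tree before and after a run. It is not Regshot, it ignores the registry and network side, and the path and detonation step are placeholders; a real test would do all of this inside a disposable VM with a sniffer running.

```python
# Minimal "snapshot, detonate, diff" sketch. Only covers the file system;
# a real test would also snapshot the registry (Regshot) and capture traffic.
import hashlib
import os

def snapshot(root):
    """Map every file under root to a SHA-256 of its contents."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as fh:
                    state[path] = hashlib.sha256(fh.read()).hexdigest()
            except OSError:
                pass  # locked or vanished file; skip it
    return state

def diff(before, after):
    added = sorted(set(after) - set(before))
    removed = sorted(set(before) - set(after))
    modified = sorted(p for p in before.keys() & after.keys()
                      if before[p] != after[p])
    return added, removed, modified

before = snapshot(r"C:\Users")   # placeholder path, taken inside the clean VM
# ... detonate the sample here and wait long enough for it to finish ...
after = snapshot(r"C:\Users")
added, removed, modified = diff(before, after)
print(f"{len(added)} files added, {len(removed)} removed, {len(modified)} changed")
# If the sample changed nothing even without an AV installed, a later "miss"
# with the AV enabled proves nothing: the sample may simply be VM-aware.
```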
 

Fabian Wosar

Malakke said:
I'm very interested to know what the right methodology is.
So is everyone else to be honest. If you find an answer, patent it and open your own testing labs. You will be rich in no time.

Seriously though, proper testing of AV software is hard. The problem is that there are always some products that do things differently, and your methodology ideally has to account for that. Webroot is a product that regularly causes problems in tests due to the unique way it works, and we have caused all testers major headaches as well, due to our refusal to sift through our users' traffic looking for URLs to filter; we filter on a much coarser level instead.

AMTSO has a nice collection of documents and guidelines. And while they are nowhere near perfect, they are the closest the industry has come to formulating guidelines that everyone can more or less live with and agree to. Even if you don't agree with them, or think that they are too heavily influenced by AV vendors, at least read them, evaluate whether some aspects seem useful to you, and adopt those.

For one thing, the fact that all samples are uploaded to VT first is a major issue. There are a lot of products out there that will automatically blacklist every single file submitted to VT that has at least one detection. Those products will always get 100% in the test as it is currently performed, suggesting superior detection, when all you did was essentially give them the sample and tell them that in 5 minutes you will stop by and see whether their backend script has added the hashes to the cloud yet. Hint: it will.
 

Fabian Wosar

Lucent Warrior said:
I'm sorry I was not accurate about you clicking on the videos and not the like button, because that is really what is important here. o_O :rolleyes:
Actually, yes, it is a big difference, because a like would be an actual endorsement. In fact, if you had just misunderstood what I meant, that would clear up a lot of the confusion about why you thought I was endorsing those videos.

Lucent Warrior said:
Where did I state they were lackluster and that I was fine with it, or was this wishful thinking on your part?
You stated the tests are a baseline and that no analysis is performed. You named various problems with that, for example VM awareness, which requires proper analysis to be detected. That is lackluster in my book. You clearly think that is okay, as you clearly prefer those tests over others. Due to the lackluster way the test is performed, you can't trust those results, though, and just clicking through the threads and looking at inconsistencies like "Missed (?)", or samples that just crash or do nothing, shows that this is not a one-in-a-million occurrence either. If there were one inconsistent sample every once in a while, I would say ignore it as well. But some days 50% or more of the samples showed such inconsistencies. That means all that is left, when it comes to the usefulness of these tests for evaluation purposes, is the signature portion, which you clearly dislike as well, given your previous statement about the "right click scans".

Lucent Warrior said:
I can state right now, clearly, that I will never recommend your products again, not because I do not think they are good, but because of you and how you carry yourself.
That is definitely your choice, but choosing to do so because you just happened to disagree with 1 of the 30 people who work at said company comes off as quite petty.

Lucent Warrior said:
I'm quite tired of this cutthroat business, where many vendors are just as bad as the cybercriminals nowadays; this is my problem.
You can always join my personal fan club and hurl insults at me. I don't mind. :) But hey, I think all the ransomware authors I piss off on a daily basis will be delighted to see that I am actually just as bad as them. So maybe they will stop insulting me now.
 

Lucent Warrior

Sengieffe said:
There are 300,000+ new samples released every day; the majority of these do not infect any users. [...] You cannot say X AV has a better detection rate, or make a decision on the quality of protection, with such a small sample set.

I'm sure those numbers fluctuate and of course really depend on whether they are modified samples, as many are, meaning they still show characteristics of the original file and should still be caught by the other modules of most products. Not to mention that many are not widespread but are mainly found in a particular geolocation. There are, as already stated, many variables.

Fabian Wosar said:
So is everyone else to be honest. If you find an answer, patent it and open your own testing labs. You will be rich in no time. [...] Seriously though, proper testing of AV software is hard.

So here you state that no one in the professional communities knows how to test properly; is this an example of making up our minds? Because you find it quite easy to dismiss and even condemn the testing done here, which led me, and I'm sure others, to believe that you know the correct procedures.

The samples are submitted to VT before testing, and also to either Malwr or Hybrid-Analysis to be analyzed to show validity. Yes, I agree many vendors do utilize VT, but it is not as instant as you project. Testing these samples, as has already been stated in this forum, is time-sensitive. As a matter of fact, when someone mentioned your product not faring well, I immediately addressed it and told them to watch for the time it was tested after the samples were posted, because if it was within a few hours, the samples have low detection, in which case many products have missed, and do miss, those. Generally, within 4 to 8 hours after the samples are posted for testing, you will see detection rates start to climb across the products.

Either way, you have insulted many good people who volunteer their efforts to help you and your user base.

Now I'm finished, as I have better things to do, like helping others who actually need it.
 

Lucent Warrior

Fabian Wosar said:
I thought signatures were all crap anyway and the benefit of the new and improved Malware Hub was that it looks at the whole picture and not just signatures. [...] Make up your mind :)

P.S.

I did not state they were crap; I said that testing JUST signatures is pointless, as it does not show the product's full ability to keep the system protected.
 

Fabian Wosar

Lucent Warrior said:
So here you state that no one in the professional communities knows how to test properly; is this an example of making up our minds? Because you find it quite easy to dismiss and even condemn the testing done here, which led me, and I'm sure others, to believe that you know the correct procedures.
"The" correct procedures? No. Better procedures? Yes.

Lucent Warrior said:
The samples are submitted to VT before testing, and also to either Malwr or Hybrid-Analysis to be analyzed to show validity. Yes, I agree many vendors do utilize VT, but it is not as instant as you project.
I can guarantee you, it is. Unless VT has some problems, it will take less than a minute until VT sends me your file, together with any results from other AVs, straight to our backend. The SQL query to add it to the database is only milliseconds away from that point. And yes, it is that instant. It may still take a bit for the detection to show up on VT, but that is usually because the command-line versions used by VT lack features or have cloud communication disabled. However, for the real product it will be there.
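For illustration only, this is roughly what that kind of feed-to-database automation could look like on a vendor backend. It is not Emsisoft's code and not the real VirusTotal feed API; the feed item shape, the one-detection threshold and the table layout are all invented to show the pattern being described.

```python
# Hypothetical sketch of a vendor consuming a VirusTotal-style feed and
# auto-blacklisting any hash that other engines already flag. Invented
# structures throughout; no real vendor's backend or VT API is shown here.
import sqlite3

db = sqlite3.connect("detections.db")
db.execute("CREATE TABLE IF NOT EXISTS blacklist (sha256 TEXT PRIMARY KEY, positives INTEGER)")

def ingest(feed_item):
    """feed_item: dict with the sample's sha256 and how many engines flagged it."""
    if feed_item["positives"] >= 1:  # at least one other engine detects it
        db.execute("INSERT OR IGNORE INTO blacklist VALUES (?, ?)",
                   (feed_item["sha256"], feed_item["positives"]))
        db.commit()  # the hash is now "detected" in the cloud

# A tester who uploads a sample to VT before running it effectively hands
# every vendor doing this the hash minutes before the test even starts.
ingest({"sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
        "positives": 7})
```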

Lucent Warrior said:
I did not state they were crap; I said that testing JUST signatures is pointless, as it does not show the product's full ability to keep the system protected.
Okay, pointless then. I would call them crap though :p
 

Sengieffe

Lucent Warrior said:
I'm sure those numbers fluctuate and of course really depend on whether they are modified samples, as many are [...] So here you state that no one in the professional communities knows how to test properly [...] which led me, and I'm sure others, to believe that you know the correct procedures.
They do; the number can get up to 500,000 on busy days. Whilst they are not mostly new malware, often a malware author will change enough so that they aren't detected via signatures. This is where behaviour blocking comes in, and testing behaviour blocking properly is important.

It's more that there are different ideas of how to test, and there are certain hurdles you have to overcome. No testing solution is perfect; that's why disputes happen even with AV-TEST. You don't have to know what the perfect solution is to say that a certain testing method has flaws. I did suggest a possible alternative to at least establish whether an antivirus has definitely missed a sample or whether the sample is just not making any changes. I will not say it does not have flaws, as it's something I wrote very quickly.

Lucent Warrior said:
I did not state they were crap; I said that testing JUST signatures is pointless, as it does not show the product's full ability to keep the system protected.
I think most people agree with this statement.
 

Lucent Warrior

Fabian Wosar said:
I can guarantee you, it is. Unless VT has some problems, it will take less than a minute until VT sends me your file, together with any results from other AVs, straight to our backend. [...] However, for the real product it will be there.
So then why, when your product is tested within a couple of hours, does it not always have 100% signature detection? Why do other products not have this within the first couple of hours, if it is instant? I think maybe you should spend more time watching the tests and products, and re-read some of my answers.
 

Fabian Wosar

Lucent Warrior said:
So then why, when your product is tested within a couple of hours, does it not always have 100% signature detection?
Because doing it that way causes a high number of false positives. Essentially you increase your false positive rate 50+ times, as you combine every single false positive there ever is on VT. Ever wondered how some AVs in AV-C or AV-T manage to get like a hundred false positives in a test? Well, you no longer have to wonder ;)
Personally, I also feel it is unethical to copy other companies' detections. So we prefer to do it the old-fashioned way and analyse samples ourselves. But there are a lot of vendors who would disagree with that.
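A back-of-the-envelope illustration of the "50+ times" figure, assuming (purely for the sake of the example) 60 engines with independent false positives and a made-up per-engine rate:

```python
# Toy calculation with invented numbers: blacklisting every clean file that
# any of ~60 engines falsely flags roughly sums the individual FP rates.
engines = 60
fp_rate_per_engine = 0.0002  # assumed: 1 clean file in 5,000 misfired on

p_any_engine_flags = 1 - (1 - fp_rate_per_engine) ** engines
print(f"own FP rate:     {fp_rate_per_engine:.4%}")
print(f"copy-VT FP rate: {p_any_engine_flags:.4%} "
      f"(~{p_any_engine_flags / fp_rate_per_engine:.0f}x worse)")
```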
 