Quote: "I don't like arrogant experts like this. I may be an amateur, but I know which samples are dangerous."

Very well put @silversurfer.
Please provide comments and solutions that are helpful to the author of this topic.
Quote: "I don't like arrogant experts like this. [...]"

Just because you are not coding these products does not mean you are not professional in how you carry yourself and how you test.
Quote: "They (Emsi devs) should separate the frustration from the reality: they might not be satisfied with the results in the HUB, but that is the reality and Emsisoft has certain results."

Actually, the results don't really bother me. What is annoying is people trying to tell us what we do or do not have to implement and using those results as some kind of argument.

Here is the problem: none of you ever looks at why the malware behaved the way it did. Instead you conjure up a miss. Oh, wscript.exe threw an error because it is a downloader and the download URL is down or was blocked? Clearly that's a miss! Oh, the ransomware detects that a certain product is installed (Cerber, for example) and doesn't do anything that could be detected? Clearly that's a miss! The unpacking loop of the obfuscator takes a minute and the tester got impatient? Clearly a miss right there! C2 server down, so nothing happens? Miss, miss, miss! It's ridiculous.

If you ever end up in a situation during a test where you feel tempted to put a question mark behind your verdict because you don't really know, that should be a huge indicator that either you aren't fit to do the test or that the methodology you work with is far too vague and could probably benefit from some clarifying addendums.
I don't mind being tested, to be honest. The more the merrier, really, and data is always interesting. I even like YouTube reviews. They are a dirty pleasure of mine, and you can rest assured that when someone tests a product I am interested in, one of the first clicks on that video is from me (<3 Google Alerts). But with the exception of one or two of them, the YouTube testers at least know that what they do isn't really professional and is pretty much insignificant. I mean, even if we missed every single sample every single day, all it would show is that we miss about 5 samples out of the 300,000 - 500,000 we receive every day. Not more, not less.
Quote: "Actually, the results don't really bother me. [...]"

This is understandable.
Quote: "Here is the problem: none of you ever looks at why the malware behaved the way it did. [...]"

This, however, is lumping all users into one category and condemning everyone here, which is something I have a problem with. Have I seen users get impatient and not allow the malware to complete its sequence, or not provide enough information about what they observed? Well, yes, but not all the time, and definitely not by all testers/users here.
Quote: "I don't mind being tested, to be honest. The more the merrier, really, and data is always interesting. I even like YouTube reviews. [...]"

This is simply ridiculous to even admit. As someone who can point out flaws quickly, you should be well aware that almost every one of the YouTube testers is using Virussign samples: old, with very high detection, averaging 45/57 VT scores before being tested. That does make the PRODUCT shine, but it does not accurately show how the product will stand against low-detection, prevalent, in-the-wild malware, which you of all people in this conversation should understand the importance of. The fact that you would back those testers "because your product looks good" and not the ones testing fresher samples and submitting them says much about you.
Quote: "I think we must have different definitions of what a 'dirty/guilty pleasure' is. At least my understanding is that it is something generally accepted to be complete and utter trash that nobody should indulge in, but that you indulge in anyway and feel a bit ashamed, dirty or guilty about. So how you can conjure up any kind of endorsement from that is just as baffling to me as the 'Missed (?)' verdicts I find in almost all Malware Hub 'tests'. To spell it out for you: they are at least as bad as the Malware Hub tests, probably even worse. Drawing information from such a small sample set tells you nothing about the overall protection. You would need at least a statistically significant number of samples to be able to draw any conclusion."

That is where you and I differ: I know the sample set is small but more prevalent, and it only takes one missed sample to infect the system, so large sample counts do not impress me when they constitute 2% of the samples in the wild.
Quote: "Stating you are first to click the like button on those videos encourages them to keep making them incorrectly, and a pleasure, whether dirty or not, is still encouraging it."

You have some significant reading comprehension and temper issues, it seems. Calm down, take your blood pressure meds or grab a paper bag to breathe into. I had a stroke incident in my family recently. It's not pretty. Also, I never said that I click "like" on those videos. I said I click on those videos, because that is what you have to do in order to watch them.
Quote: "Whether a sample is missed or the verdict is not present can also depend on many variables, as you know quite well: anything from the sample being containment-aware to inconsistencies from testing in a contained environment. The tests are a baseline, and full analysis is not done on the samples, you are correct. It still does not take away from the fact that the sample was not detected by signature, and submitting it helps improve your database of signatures, for which I have yet to see a thank you to those volunteering to help your product."

I thought signatures were all crap anyway, and the benefit of the new and improved Malware Hub was that they look at the whole picture and not just signatures. Now you are telling me it is fine that the behaviour portion of the tests is lackluster, because at least the signature portion is done right and already provides a baseline value. Make up your mind.
Quote: "You have some significant reading comprehension and temper issues, it seems. [...]"

Oh good, he can twist words. I'm impressed.
Quote: "I'm very interested to know what the right methodology is. I appreciate Emsi a lot, but you should be ready to accept criticism, especially when it is not favourable at all. This is a site where Emsisoft has always been well regarded (in fact, I learned about your software on MT), so I was sincerely a bit surprised by your reaction. If you consider the Malware Vault rubbish, all suggestions to improve the methodology are welcome. Thanks and regards."

If you really want to test malware properly, you need to use tools like Regshot and a network sniffer to figure out what changes an individual sample makes to the system when it is run. Record those, then reset the VM, and repeat for each sample. Then enable the AV you are testing, run the individual sample again with Regshot and the network sniffer running, and compare the changes made. If the sample made no changes on the first run, you cannot necessarily consider it a miss when the AV doesn't catch it, as the sample may be VM-aware. You have to know what changes to look for, however.
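As a rough illustration of the snapshot-and-diff approach described above, here is a minimal Python sketch. It only walks a single registry subtree (real tools like Regshot also cover files, services and more), the chosen subtree is an arbitrary example, and it is Windows-only; treat it as a sketch of the idea, not the actual tooling mentioned.

```python
# Minimal sketch of the "snapshot, detonate, snapshot again, diff" idea.
# Only covers one registry subtree; Regshot and similar tools track far more.
# Windows-only (uses the standard winreg module).
import winreg

def snapshot(root=winreg.HKEY_CURRENT_USER, subkey=r"Software"):
    """Return {key_path: {value_name: data}} for a registry subtree."""
    result = {}

    def walk(path):
        try:
            key = winreg.OpenKey(root, path)
        except OSError:
            return
        values = {}
        i = 0
        while True:  # enumerate values until EnumValue runs out
            try:
                name, data, _ = winreg.EnumValue(key, i)
                values[name] = data
                i += 1
            except OSError:
                break
        result[path] = values
        i = 0
        while True:  # recurse into subkeys
            try:
                walk(path + "\\" + winreg.EnumKey(key, i))
                i += 1
            except OSError:
                break

    walk(subkey)
    return result

def diff(before, after):
    """Print keys and values that were added or changed between snapshots."""
    for path, values in after.items():
        if path not in before:
            print("NEW KEY:", path)
            continue
        for name, data in values.items():
            if before[path].get(name) != data:
                print("CHANGED:", path, name, "->", repr(data))

# Usage inside the test VM:
#   before = snapshot()
#   ...run the sample and let it finish...
#   diff(before, snapshot())
```

A packet capture running alongside would cover the network half of the same before/after comparison.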
Quote: "I'm very interested to know what the right methodology is."

So is everyone else, to be honest. If you find the answer, patent it and open your own testing lab. You will be rich in no time.

Seriously though, proper testing of AV software is hard. The problem is that there are always some products that do things differently, and your methodology ideally has to account for that. Webroot is a product that regularly causes problems in tests due to the unique way it works, and we have caused all testers major headaches as well due to our refusal to sift through our users' traffic looking for URLs to filter; we filter on a much coarser level instead.
Quote: "I'm sorry I was not accurate about you clicking on the videos and not the like button, because that is really what is important here."

Actually, yes. It is a big difference, because a like would be an actual endorsement. In fact, if you had just misunderstood what I meant, it would clear up a lot of the confusion about why you thought I was endorsing those videos.
Quote: "Where did I state they were lackluster and that I was fine with it, or was this wishful thinking on your part?"

You stated the tests are a baseline and that no analysis is performed. You named various problems with that, for example VM awareness, which requires proper analysis to detect. That is lackluster in my book. You clearly think that is okay, as you clearly prefer those tests over others. Due to the lackluster way the tests are performed, you can't trust the results, and just clicking through the threads and looking at inconsistencies like "Missed (?)", or samples that simply crash or do nothing, shows that this is not a one-in-a-million occurrence either. If there were one inconsistent sample every once in a while, I would say ignore it as well. But on some days 50% or more of the samples showed such inconsistencies. That means all that is left, when it comes to the usefulness of these tests for evaluation purposes, is the signature portion, which you clearly dislike as well given your previous statement about the "right click scans".
Quote: "And here you forced me to respond again, even though I stated I would not, but I'm sure you will find a dig at this too. I can state right now, clearly, that I will never recommend your products again: not because I do not think they are good, but because of you and how you carry yourself."

That is definitely your choice, but choosing to do so because you happened to disagree with 1 of the 30 people who work at said company comes off as quite petty.
Quote: "I'm quite tired of this cut-throat business, where many vendors are just as bad as the cybercriminals nowadays. This is my problem."

You can always join my personal fan club and hurl insults at me. I don't mind. But hey, I think all the ransomware authors I piss off on a daily basis will be delighted to see that I am actually just as bad as them. So maybe they will stop insulting me now.
There are 300,000+ new samples released every day, and the majority of them never infect any users. No antivirus has a 100% detection rate across all of those 300,000 or so samples, so "it only takes 1 sample to infect the system when missed" applies to the whole industry. You cannot say that AV X has a better detection rate, or judge the quality of its protection, from such a small sample set. You might have picked 10 samples that certain AVs miss but others detect, while with another random 10 the AVs that missed the first set detect them all, and vice versa.
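To put a rough number on how little a ten-sample run can tell you, here is a small sketch using a Wilson score interval; the sample counts and detection figures are made up purely for illustration.

```python
# Rough illustration of the sample-size point above: the uncertainty around
# a detection rate measured on a handful of samples is enormous.
# Wilson score interval with z = 1.96 (~95% confidence); numbers are invented.
import math

def wilson_interval(detected, total, z=1.96):
    p = detected / total
    denom = 1 + z * z / total
    centre = (p + z * z / (2 * total)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return centre - margin, centre + margin

for detected, total in [(9, 10), (2700, 3000)]:
    lo, hi = wilson_interval(detected, total)
    print(f"{detected}/{total} detected -> 95% CI roughly {lo:.0%} to {hi:.0%}")

# 9/10 detected -> roughly 60% to 98%: almost no conclusion possible.
# 2700/3000     -> roughly 89% to 91%: a meaningful estimate.
```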
"The" correct procedures? No. Better procedures? Yes.So here, you state no one in the professional communities knows how to properly test, is this an example of make up our minds? Because you find it quite easy to dismiss and even condemn testing done here, which lead me to believe and im sure others, that you know the correct procedures.
Quote: "The samples are submitted to VT before testing, and also to either Malwr or Hybrid-Analysis to be analyzed to show validity. Yes, I agree many vendors do utilize VT, but it is not as instant as you project."

I can guarantee you, it is. Unless VT has problems, it will take less than a minute until VT sends me your file, together with the results of the other AVs, straight to our backend. The SQL query to add it to the database is only milliseconds away from that point. And yes, it is that instant. It may still take a while for the detection to show up on VT, but that is usually because the command-line versions used by VT lack features or have cloud communication disabled. For the real product, however, it will be there.
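For illustration only, the kind of ingestion step being described might look something like the sketch below: a report arrives from a feed and a single SQL statement records it. The schema, field names and the sqlite backend are assumptions made up for this example, not a description of Emsisoft's actual pipeline.

```python
# Hypothetical sketch of a feed-to-database ingestion step. Everything here
# (schema, field names, sqlite) is an assumption for illustration only.
import sqlite3

conn = sqlite3.connect("samples.db")
conn.execute("""CREATE TABLE IF NOT EXISTS detections (
    sha256     TEXT PRIMARY KEY,
    first_seen TEXT DEFAULT CURRENT_TIMESTAMP,
    positives  INTEGER,
    total      INTEGER)""")

def ingest(report):
    """Record one feed report: {'sha256': ..., 'positives': ..., 'total': ...}."""
    conn.execute(
        "INSERT OR IGNORE INTO detections (sha256, positives, total) VALUES (?, ?, ?)",
        (report["sha256"], report["positives"], report["total"]),
    )
    conn.commit()

# Example report shaped like a multi-engine scan summary (values invented).
ingest({"sha256": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
        "positives": 41, "total": 57})
```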
Quote: "I did not state they were crap. I said JUST testing signatures is pointless, as it does not show the product's full ability to keep the system protected."

Okay, pointless then. I would call them crap, though.
Quote: "I'm sure those numbers fluctuate and of course really depend on whether they are modified samples, as many are, meaning they still show characteristics of the original file and should still be caught by the other modules of most products. Not to mention that many are not widespread but mainly found in a particular geolocation. There are, as already stated, many variables."

They do; the number can get up to 500,000 on busy days. While most of it is not new malware, authors will often change a sample just enough that it is no longer detected via signatures. This is where behaviour blocking comes in, and testing behaviour blocking properly is important.
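As a tiny, generic illustration of why a small modification defeats the simplest kind of signature (an exact file hash), consider the sketch below; the byte strings are obviously not real malware.

```python
# One changed byte produces a completely different file hash, even though
# what the file would do when run is essentially unchanged. This is the
# weakness of exact-hash signatures that behaviour blocking compensates for.
import hashlib

original = b"MZ\x90\x00" + b"payload bytes..." * 100   # stand-in for a sample
variant = original[:-1] + b"!"                         # change a single byte

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(variant).hexdigest())
```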
Lucent Warrior said: "I did not state they were crap. I said JUST testing signatures is pointless, as it does not show the product's full ability to keep the system protected."

I think most people agree with this statement.
Quote: "I can guarantee you, it is. Unless VT has problems, it will take less than a minute until VT sends me your file, together with the results of the other AVs, straight to our backend. [...]"

So then why, when your product is tested within a couple of hours, does it not always have 100% signature detection? Why do other products not have this within the first couple of hours, if it is that instant? I think maybe you should spend more time watching the tests and the products, and re-read some of my answers.
Quote: "So then why, when your product is tested within a couple of hours, does it not always have 100% signature detection?"

Because doing it that way causes a high number of false positives. Essentially you multiply your false positive rate by 50 or more, as you combine every single false positive there has ever been on VT. Ever wondered how some AVs in AV-C or AV-T manage to rack up a hundred false positives in a test? Well, now you no longer have to wonder.
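A back-of-the-envelope version of that false-positive argument: if you automatically trust a hit from any engine, you inherit every engine's false positives. The per-engine rate below is an invented figure, purely to show how the numbers compound.

```python
# If any single engine flagging a clean file is enough to blacklist it,
# the combined false-positive rate grows quickly with the number of engines.
# The 0.1% per-engine figure is made up, and independent errors are assumed.
per_engine_fp = 0.001   # assumed chance one engine flags a given clean file
engines = 50

combined = 1 - (1 - per_engine_fp) ** engines
print(f"One engine: {per_engine_fp:.2%} of clean files flagged")
print(f"Any of {engines} engines: {combined:.2%} of clean files flagged")
# ~0.10% per engine becomes ~4.88% when any single hit counts.
```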