Do you really understand AV test results?

Andy Ful

From Hard_Configurator Tools
Thread author
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,593
Eset also uses machine learning:notworthy: that's how it has good static detection rate!! Machine learning by ESET: The road to Augur
Threat intelligence solution for your organization
Thanks for the links. Augur AI seems to use a similar idea to Avast's. If so, then it can be tested in the standard tests, because it does not use postinfection signatures. Avoiding postinfection signatures is a natural move for paid AVs, because otherwise their protection would be underestimated in the standard tests.
 

Deleted member 65228

Eset also uses machine learning:notworthy: that's how it has good static detection rate!! Machine learning by ESET: The road to Augur
Threat intelligence solution for your organization
ESET tend to be excellent when it comes to creating generic signatures. Generic signatures are part of static heuristics; they allow the scanning engine to flag samples which share similarities with other samples previously found in the wild, even though the specific sample in question has not yet been seen and processed by the vendor. They are typically used alongside standard checksum-hash detection, which is flawed, hence the introduction of generic signatures. Vendors could not keep up with how many new malicious samples were being pushed into the wild (and an attacker can make one single change to the bytes to refresh the checksum hash), so they introduced generic signatures, which usually rely on a byte pattern: this byte pattern is found within the target PE and raises an internal flag.

As an example.
1. FooBar.exe
- Hash: XXXXXXXXXXXX.... (the length depends on the hash type, such as MD5, SHA-1 or SHA-256; SHA-1/SHA-256 are more secure than MD5 but also take longer to calculate)
- The hash is added to the signature database to detect the threat.
- The attacker now changes 1 byte of the thousands of bytes within the PE; the hash is no longer equal to the hash in the database, so the product no longer detects it.
- Generic signatures can be applied when the sample is first added to the database, alongside the checksum hash, using a pattern that identifies both this sample and any other samples, known or unknown, which share the same pattern, without flagging many clean samples (generic signatures require reliability and therefore need to be created by someone with experience).
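The note about hash length above is easy to see for yourself. A quick sketch (the file contents are made up for illustration) showing how many hex characters each common digest produces:

```python
import hashlib

# Made-up file contents standing in for FooBar.exe.
data = b"FooBar.exe contents"

# Digest sizes: MD5 = 128 bits, SHA-1 = 160 bits, SHA-256 = 256 bits,
# i.e. 32, 40 and 64 hex characters respectively.
for name in ("md5", "sha1", "sha256"):
    digest = hashlib.new(name, data).hexdigest()
    print(name, len(digest), digest[:8] + "...")
```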

FooBar.exe may have a set pattern of bytes; for demonstration's sake we will use a fake pattern which is not even valid hex: AA 55 BA 6A BB CC DD EE FF G1 (0xAA, 0x55, 0xBA, 0x6A, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF, 0xG1; the 0x prefix marks individual bytes, but the pattern itself is hex).

Now, when the attacker changes the checksum by manipulating one or more bytes in the PE, the checksum changes but the byte pattern still exists, as long as the bytes covered by the signature are not touched (sometimes evading the detection requires the code to be re-written so that the byte pattern becomes non-existent, depending on the circumstances). This means the sample is still flagged, even if the attacker added more functionality, re-compiled the binary, and no one but the attacker has seen it yet.
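As a minimal sketch of that point (the signature bytes and sample contents below are invented for illustration): flipping a single byte outside the signature region refreshes the checksum, but the byte pattern still matches.

```python
import hashlib

# Hypothetical byte pattern used as a "generic signature" (illustrative only).
SIGNATURE = bytes([0xAA, 0x55, 0xBA, 0x6A, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF])

# Two versions of the same (fake) sample: the attacker flips one byte
# somewhere outside the signature region to refresh the checksum.
original = b"\x00" * 64 + SIGNATURE + b"\x00" * 64
modified = b"\x01" + original[1:]

# Checksum detection breaks: one flipped byte gives a different hash.
print(hashlib.md5(original).hexdigest() == hashlib.md5(modified).hexdigest())  # False

# Generic signature still matches: the pattern survives the change.
print(SIGNATURE in original, SIGNATURE in modified)  # True True
```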

FooBar2.exe may come along in the wild, developed by someone else. It may do something that FooBar.exe from attacker 1 did, and depending on the strength of the signature, what the signature represents, and whether the second author happened to use the same code-base for the feature the signature was made from (it isn't uncommon for malware authors to copy-paste or re-use others' code for malware targeted at home users), FooBar2.exe is now also picked up, despite never having been seen before and possibly not even being the same variant type as FooBar.exe.

The aim of generic signatures is to identify other samples, known or unknown, with one signature or a few, without having to keep track of all the checksum hashes for the same threat whenever the hash is changed before the sample is re-released into the wild, and to catch similar threats, known or unknown, for the same reason.

There are other types of generic flagging as well, such as flagging a combination of Imports or suspicious Strings... Or other characteristics in combination, such as entropy, PE file details/digital signature verification, or the binary file name (e.g. an impostor of svchost.exe or winlogon.exe), etc.
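A toy sketch of that kind of combined flagging. The indicator lists, weights, and scoring here are entirely made up; real engines use far more refined logic, but the idea of summing independent indicators is the same:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; packed/encrypted sections tend toward 8.0."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical indicators (invented for this demo).
SUSPICIOUS_IMPORTS = {"WriteProcessMemory", "CreateRemoteThread", "VirtualAllocEx"}
SYSTEM_NAMES = {"svchost.exe", "winlogon.exe"}

def heuristic_score(imports, strings, filename, payload: bytes) -> int:
    score = 0
    if SUSPICIOUS_IMPORTS & set(imports):
        score += 2   # injection-style API combination
    if any("cmd /c" in s.lower() for s in strings):
        score += 1   # suspicious embedded command line
    if filename.lower() in SYSTEM_NAMES:
        score += 2   # impostor of a well-known system binary name
    if shannon_entropy(payload) > 7.5:
        score += 1   # high entropy: likely packed/encrypted
    return score
```

A sample would be flagged when the score crosses some threshold; the point is that no single indicator decides the verdict on its own.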

Anti-Virus products like ESET have a good memory scanner, which is useful for tackling packers... Packing is a technique used to mask a PE, whether clean or malicious. It is used for both genuine and non-genuine purposes, because some people are paranoid and use it to help "protect" their software from reverse engineering (even though an AV analyst is not going to sweat to analyse it, and most of the time neither will anyone else experienced enough, whether they work for an AV vendor or not). I'll explain how memory scanning works for you now.

1. FooBar.exe has a generic signature applied in the virus signature database, but the checksum is unknown and it has been re-released into the wild with packing techniques to evade detection. It may be completely undetected on VirusTotal due to this, and can be more difficult/time-consuming for an analyst to reverse engineer because factors such as the Import Address Table, Strings, etc. are all messed up, and anti-debugging may also be applied.
2. FooBar.exe is executed by the user/other malware already running on the system.
3. The Anti-Virus product intercepts this process start-up request via a kernel-mode callback (nowadays), which is invoked by an undocumented Psp* kernel-mode routine after NtCreateUserProcess (an NTOSKRNL routine, invoked via a system call from NTDLL.DLL in user-mode) is called.
4. The Anti-Virus product can now scan the file on-disk corresponding to the image being loaded into memory for the new start-up request, typically via the normal database (e.g. checksum and byte pattern).
5. The verdict is clean because the sample is packed and thus the generic signature is no longer valid in this context, and the hash is unknown to the database.
6. Cloud scanning may now be applied at this time, or after execution (some vendors let the unknown program run and do a cloud scan asynchronously for performance reasons).
7. The verdict may still be clean so the callback code is finished and the request is continued.
8. The sample runs; however, it is still being monitored by the AV product. Characteristics in memory will be "unpacked" (decrypted, should I say), or in some other scenarios (less common nowadays) another PE, the unpacked copy, is dropped to disk and executed while the original terminates.
-> If the latter, that new unpacked copy is scanned from step 1 again and gets identified by the generic signature, which would now be valid in this scenario if one existed for the sample.
9. Now that unpacking has occurred in memory, the byte signature pattern is valid in memory! The vendor flags it, terminates the process and auto-quarantines, depending on user configuration.

Memory scanning usually works by finding the base address of the image in memory and then moving through it in chunks, running the byte-signature scan on each chunk. Smarter scanners notice when the end of a chunk matches the start of a signature and move back X bytes to re-read a better-aligned chunk, accurately identifying whether there was a match there; if not, they keep moving forward.
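That chunk walk, with the step-back to cover matches straddling a chunk boundary, can be sketched roughly like this. The `read_chunk` callback is a stand-in for whatever primitive actually reads the target's memory (e.g. from a driver); the chunk size, signature, and fake image are invented for the demo:

```python
def scan_memory(read_chunk, image_size, signature, chunk_size=4096):
    """Scan an in-memory image for a byte signature, chunk by chunk.

    To catch signatures straddling a chunk boundary, each step moves
    forward chunk_size minus (len(signature) - 1) bytes, so no match
    can be split across two reads.
    """
    overlap = len(signature) - 1
    offset = 0
    while offset < image_size:
        chunk = read_chunk(offset, chunk_size)
        pos = chunk.find(signature)
        if pos != -1:
            return offset + pos          # absolute offset of the match
        offset += chunk_size - overlap   # step back to cover the seam
    return -1

# Demo: a fake "memory image" with the signature crossing a chunk boundary.
sig = b"\xAA\x55\xBA\x6A"
image = b"\x00" * 4094 + sig + b"\x00" * 100
print(scan_memory(lambda off, n: image[off:off + n], len(image), sig, 4096))  # 4094
```

The first 4096-byte read only sees the first two signature bytes; the overlapping second read catches the full pattern.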

Memory scanning can also include scanning new modules as they are loaded into memory, also typically done through a kernel-mode callback.

---
Machine Learning/AI is very beneficial for detection though, especially for vendors like ESET, Avast and Bitdefender, due to the resources they have... You can be sure they have a huge network to handle it all effectively and efficiently. I'm sure we'll see some amazing improvements with all of that over the next few years, and we cannot forget Avira either; they provide their Machine Learning with their SDK.

All in all, ESET especially use a combination of techniques, from checksum-hash scanning to generic signatures (with a good memory scanner) and Machine Learning technology. Vendors (I believe ESET likely counts here as well) also tend to focus on emulation technology, which is extremely beneficial as well.

Btw Tencent has the largest AI lab:
I reckon that vendors like Norton likely have the largest malware intelligence database, since they've been around since more or less the very start, but I don't know about AI... Interesting :)
 

Sunshine-boy

Level 28
Verified
Top Poster
Well-known
Apr 1, 2017
1,782
Opcode :notworthy: I'm an average user and you filled my head with a lot of data and information xd I don't even know what an NTOSKRNL routine is :D but thanks for the explanations! Maybe others will benefit from your knowledge! For me, it took an hour to understand your comments completely hahah
don't know for Ai..
I'm not 100% sure but I think they are working hard!
 

Windows_Security

Level 24
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Mar 13, 2016
1,298
The engineers for Windows Defender don't necessarily have intelligence on undocumented internals of the Windows NT Kernel all the time. They may gain intelligence on silent things sometimes, but not always. Windows Defender typically uses techniques that other vendors have used for much longer, or techniques newly introduced in new OS versions which are also accessible to third parties. Even if Windows Defender does something not exposed to third parties by default, such as an undocumented kernel-mode callback, some reverse engineering and Bob's your uncle, now you know how Windows Defender does it. If Windows Defender can do it, so can a third party.

Microsoft have to keep their eyes peeled and re-assess everything they do for Windows Defender. If they silently implement something to give Windows Defender an advantage and it gets exposed through reverse engineering/analysis, it could be abused by criminals for some really bad things.

There are engineers at vendors like Norton and Kaspersky who have been working with Windows NT Kernel since the start of security software for it. They may not work at Microsoft and be involved with the development, but they are bound to know a ton and it is usually these same engineers/researchers that find and exploit vulnerabilities in the Windows NT Kernel/find new methods of doing something to mitigate the attack in advance.

When you translate it to normal business, you have to admit it sounds awkward.

You are telling me that Toyota engineers know a lot more about the engines of Ford or BMW, or that the engineers of BMW know more about Bosch fuel injection systems than the Bosch engineers (assuming BMW uses Bosch motor management systems).

Sophos provided their free product before they bought Sandboxie, SurfRight, and Invincea!

Sophos developed their own ML/AI before they bought Invincea. ;)
 

Deleted member 65228

You are telling me that Toyota engineers know a lot more about the engines of Ford or BMW; that does not make sense to me.
No, that isn't what I am saying. It really depends on the scenario, but Microsoft engineers knowing more than everyone else isn't always the case.

Microsoft have many different departments with many different teams of people. The people actually working on NTOSKRNL (the Windows Kernel) might have involvement with Windows Defender, but there is likely a whole separate team focused on the malware protection mechanisms. The Windows Defender team might communicate with the main kernel engineers to request intelligence or changes, but Windows Defender is far more exposed, so they have to re-assess what they do: if they do something strictly for Windows Defender, someone will find out and may be able to abuse it easily, and if important intelligence about undocumented sides of Windows, which Microsoft really wants buried under a hatchet, is exposed by Windows Defender, then it is exposed to everyone else.

Of course the people working on Windows Defender will be very experienced and have a lot of knowledge, but they'll be sticking to documented and officially supported mechanisms, and are unlikely to be using undocumented techniques silently implemented in the OS Kernel, which could just be abused by someone else to do things Microsoft didn't want done in the first place. Allowing Windows Defender to do something that someone else shouldn't be doing is not a good idea and is also a security risk. Hence why they intercept the file system the same way third parties do, and probably process execution scanning as well.

Nor does Microsoft give all intelligence to all their engineers, from what I was told by someone I knew who worked there as a programmer (not sure which department, and they eventually left), and this makes perfect sense because it prevents critical leaks.

As for engineers at other AV vendors, there are people who have done extensive research on the Windows Kernel and have been with AV security vendors since the start... Look at NirSoft: the amount of undocumented kernel structures they published back in the Windows Vista days was incredibly interesting, and now more than just Microsoft know about specific kernel structures which were originally 100% undocumented. Microsoft engineers come and go; old developers of things like the Windows Kernel may start working elsewhere and still retain all their knowledge, and vice versa. New engineers may know less about the history of specific features... The Windows Kernel is huge, the code-base will be absolutely massive, and I really doubt every single engineer there knows it from top to bottom. They most likely have tons of code which hasn't been updated for years, or update parts of it at small intervals while maintaining more or less the same code-base/desired result, unless it is absolutely necessary to re-do something completely. Whether you work at Microsoft or not, if you need to gain intelligence and have a reverse engineering skill-set, then time is all that is required... Which is how people discover secrets about things like PatchGuard and bypass it: the researcher has then documented it themselves and also exploited it.

I mean take it with a grain of salt but this seems more realistic to me.
 

Sunshine-boy

Level 28
Verified
Top Poster
Well-known
Apr 1, 2017
1,782
I didn't read anything about Sophos and machine learning before 2017.
https://www.sophos.com/en-us/medial...forCybersecurityDemystifiedbySophos.pdf?la=en
In February 2017 Sophos acquired machine learning security firm Invincea and has spent the months since then synthesizing their technology and brainpower into SophosLabs and our product line. Along the way, our data scientists have written several articles revealing the workings of machine learning—deep learning, in particular—how it will be brought to bear by Sophos, and how that will help secure our customers
 

Windows_Security

Level 24
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Mar 13, 2016
1,298
No, that isn't what I am saying. It really depends on the scenario but the case of Microsoft engineers always knowing more than everyone else isn't always the case.

[...]

I mean take it with a grain of salt but this seems more realistic to me.

Departments not communicating with each other is something which applies to normal business as well, so now I am believing you ;)

On the other hand when Microsoft decided to make Windows Defender OS aware to improve intrusion detection and ATP response time for their business line products, my guess is that those development teams talked to each other.

But you make a valid point: the dev team of Windows Defender has to comply with company guidelines in a strict way. So they have to be more Papist than the Pope (as we say in Dutch), which probably gives engineers at other security companies more room to move.

I didn't read anything about Sophos and machine learning before 2017.
https://www.sophos.com/en-us/medial...forCybersecurityDemystifiedbySophos.pdf?la=en
In February 2017 Sophos acquired machine learning security firm Invincea. our data scientists have written several articles

Well, you just posted that Sophos already had data scientists working for them. :cool: Just pulling your leg, I can't find it either.
 

Andy Ful

From Hard_Configurator Tools
Thread author
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,593
No, that isn't what I am saying. It really depends on the scenario but the case of Microsoft engineers always knowing more than everyone else isn't always the case.

[...]

I mean take it with a grain of salt but this seems more realistic to me.
That can be especially true for Windows 7. But Windows 10 is subject to so many quick changes that Microsoft engineers can have a real advantage here. Who knows, we can only guess.

Sadly, it seems that they are focused on using AI for everything except AVs. But, this can change soon.
------------------------------------------------------------------------------------------
Kaspersky AI is also based on pre-execution data (except maybe KSN feature):
Machine learning in Kaspersky Endpoint Security 10 for Windows
https://media.kaspersky.com/pdf/KESB_Whitepaper_KSN_ENG_final.pdf
.
The same is true for AhnLab:
Cyber Security Industries Plan to Fight off High-Tech Cyber-Attacks with Artificial Intelligence
.
And for TrendMicro:
Achieving Real-Time Threat Prevention with TippingPoint Machine Learning -
.
And for Sophos (a very technical article including the AI model example):
https://www.sophos.com/en-us/medial...at-detection-model.pdf?cmp=70130000001xKqzAAE
 

Andy Ful

From Hard_Configurator Tools
Thread author
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,593
I do agree. I don't work for Microsoft, so I do not know their exact management practices; I was just saying what I thought was more realistic. However, even though I don't work for Microsoft, I can still see how some things work with ease through static analysis alone, without needing to remotely debug the kernel of another target machine via WinDbg to track what is being called and for what:

[Screenshot: disassembly showing NtCreateUserProcess reaching PspCreateObjectHandle]


Now we all know that the fully undocumented routine within ntoskrnl.exe called PspCreateObjectHandle is invoked by NtCreateUserProcess (and by PspCreateProcess, which is called by NtCreateUserProcess somewhere down the line, or by routines NtCreateUserProcess calls), as an example, for Windows 10 on the latest updates. Google the function; I bet you'll find nothing but references in dumps of exported/internal routines. Yet we can understand how the parameters work and how to use PspCreateObjectHandle ourselves, because the parameters are used for ObOpenObjectByPointer, which happens to be documented (and even if it wasn't, we'd reverse that routine as well, and other interesting routines it uses, until we understand how it all works and links together to the routines that invoke it).

I can also see that the function PspCreateObjectHandle is dependent on the ObOpenObjectByPointer routine, and using that information alone/some WinDbg debugging to enhance intelligence, I would be able to make a wrapper function of the undocumented routine which can be reliably used to replicate the functionality, especially since ObOpenObjectByPointer is exported by ntoskrnl.exe (no byte pattern scanning required).

How? Microsoft provide the symbols for their components, most or all of them I believe. I am not entirely sure; however, I've never run into an issue when trying to obtain the symbols for important components like ntoskrnl.exe, lsass.exe, csrss.exe, ntdll.dll, kernel32.dll, user32.dll, sechost.dll, advapi32.dll, etc.

Another example would be Windows Defender and an undocumented kernel-mode callback called SeRegisterImageVerificationCallback. It is used by one of the Windows Defender kernel-mode device drivers and can be leveraged for boot-time scanning, I believe. It is also exported by ntoskrnl.exe, which makes it even easier for a third party to use it for the exact same purpose Microsoft themselves use it for... Despite Microsoft implementing it silently in Windows 8 (or 8.1) for their own internal use, it was exposed by researchers a while ago and now everyone who knows how to can use it. We can see from the below screenshot some small details which are sufficient.

[Screenshot: disassembly of SeRegisterImageVerificationCallback showing the call to ExRegisterCallback]


Now we know that the routine SeRegisterImageVerificationCallback (undocumented by Microsoft) will call another routine called ExRegisterCallback. What about the parameters to the initial routine, how does it know where the callback routine setup by ourselves will be?

[Screenshot: disassembly showing the third parameter (a3) being used as a pointer to the callback routine]


Now we know that the third parameter, named a3 in this disassembly/analysis demo, is actually pointing to a routine. We can see it's a pointer (therefore an address), and we can also see that the data required for the callback routine is leaked as well... The callback routine will need three parameters. Debugging and further analysis will expose exactly what those parameters are for and how we can use them.

That only took a few minutes, with some dedicated time to enhance the investigation and some WinDbg it would be very practical to find out how to use the callback from top to bottom and make use of it for a proper feature as long as the callback is existent on the target machine.

This isn't meant to be some intro to reversing Windows (Opcode Analysis: Windows Kernel and the Naked Eye, volume 1 series hahahahah), just an example that even though Microsoft own Windows and may add things for their own security, it doesn't stop third parties from finding out what is going on and replicating things manually, or using the same things Microsoft uses for their own benefit. The only exception I can think of would be a trademark conflict/license agreement which prohibits such use, but honestly I really doubt Microsoft care that much about a third party using things originally implemented for the use of Windows Defender.

In this very example, the callback routine introduced for Windows Defender use was exposed and can be freely used by third-parties because it is also exported which makes it a whole lot easier.

Microsoft provide the symbols... They can be used statically and during debugging. So even for the things they want left undocumented, it is relatively easy in a majority of cases for a researcher to figure it out himself with time and dedication.
---

But I do see where both you and @Andy Ful are coming from, and I also agree. It is hard to explain... I agree with you both at the same time but also hold a different view simultaneously. So what I am saying is I don't disagree, I agree, but I also have other perspectives I can look at the situation from... though I don't fully agree with those alternate views either; I can look at the situation from all different perspectives... If that makes any sense? :)
Yes. It makes sense for me. Both scenarios are possible.:)
 

509322

@Lockdown,

I used your own explanation (a 3-4 percent difference has near-zero relevance) to show I could counter your statements on Windows Defender. I like to read your posts because you have a strong opinion, but I am not going into the discussion of who understands AV testing better. Let's stick to some facts.

Fact 1- MSRT
Microsoft sees more PCs than any other security vendor in the world through their monthly malware removal tool. So it would be irrational for Windows Defender to be worse than any other antivirus on malware older than two weeks (two weeks = half a month = the average age of malware on a PC when MSRT runs).

Fact 2 - Microsoft AI platform

Microsoft is one of the top three vendors in Artificial Intelligence/Machine Learning toolkits. They even made some components open source (GA Cognitive Toolkit 2, for example). Microsoft built Cortana on AI technology. So it would be irrational for Microsoft not to include some of this knowledge in the cloud backend of Windows Defender.

Fact 3 - Windows Defender is the first OS-aware antivirus

The advantage of an antivirus behavioral detection component having access to the inner mechanisms of the OS is immense. Their data (telemetry) collection is more detailed than that of any third-party antivirus. Combined with their huge user base, cloud-based reputation service and AI/ML capabilities, it explains why Windows Defender improved from 60% protection to above 95% protection on samples less than two weeks old.

P.S. I am not claiming that Windows Defender is a top-tier antivirus, just explaining that Microsoft, using a fraction of its knowledge, is capable of creating an antivirus performing in the middle of the pack.

The 3-4% spread was an example of how people will argue about their favorite AVs on the forums when the differences between them are well within 3 or 4%. My point was that people who understood that a 3-4% difference practically makes no difference wouldn't bother to debate it on the forum.

I use Windows Defender because I have our product on the system. If you supplement Windows Defender - as Microsoft does on their enterprise systems with AppLocker, Device Guard, turning stuff off as Microsoft advises, etc. - then Windows Defender makes for a good foundation. Use it all by itself, and it is OK protection. Hurl new stuff at it, and it ends up around 60% or less detection. Since the labs use samples that are "representative" of what a user might encounter during real-world computing, Windows Defender on Windows 10, based upon AV-Comparatives' test, is OK. One lab (I can't recall which) says Windows 10 Defender is an epic fail, with a nice report explaining why. Like all these AV lab results, it depends upon which lab's test result the reader is holding in their hands.

The tests give no true indication of new, "zero-day" malware capabilities because they are not using such malware samples in their tests. They are using samples that are supposedly "representative" of samples that people are likely to encounter in their online misadventures.

The way Windows handles "zero-day" is default-deny via a reputation lookup in SmartScreen. Microsoft's SmartScreen has serious holes because of the way Microsoft designed it.
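The default-deny reputation idea described above can be sketched in a few lines. This is a minimal illustration, not SmartScreen's actual protocol: the `KNOWN_GOOD`/`KNOWN_BAD` sets are hypothetical stand-ins for the cloud backend that SmartScreen really queries.

```python
import hashlib

# Hypothetical stand-ins for a cloud reputation service (the real SmartScreen
# backend is a remote lookup, not a local set).
KNOWN_GOOD: set[str] = set()
KNOWN_BAD: set[str] = set()

def reputation_verdict(file_bytes: bytes) -> str:
    """Default-deny: a file not explicitly known-good is never silently allowed."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in KNOWN_BAD:
        return "block"
    if digest in KNOWN_GOOD:
        return "allow"
    # The defining property of default-deny: unknown files trigger a warning,
    # which is how brand-new ("zero-day") binaries get stopped without a signature.
    return "warn"

print(reputation_verdict(b"never seen before"))  # warn
```

The point of the sketch is the last branch: a blacklist allows everything it does not recognize, while a reputation-based default-deny warns on everything it does not recognize.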

AI/machine learning is just detection by another name - AVAST, ESET, and others have been using it for decades at this point.
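To make "detection by another name" concrete, here is a toy static classifier of the kind ML-based engines generalize: it turns a file's bytes into a frequency histogram and assigns the nearest class centroid. This is a deliberately simplified sketch using made-up training data; real engines use far richer features and models.

```python
from collections import Counter

def byte_histogram(data: bytes) -> list[float]:
    """256-bin normalized byte-frequency feature vector."""
    counts = Counter(data)
    total = len(data) or 1
    return [counts.get(b, 0) / total for b in range(256)]

def centroid(vectors: list[list[float]]) -> list[float]:
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def distance(a: list[float], b: list[float]) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Toy "training set": text-like benign files vs. high-entropy (packed) files.
benign = [byte_histogram(b"hello world " * 50), byte_histogram(b"readme text " * 50)]
malicious = [byte_histogram(bytes(range(256)) * 4), byte_histogram(bytes(reversed(range(256))) * 4)]
benign_c, malicious_c = centroid(benign), centroid(malicious)

def classify(data: bytes) -> str:
    v = byte_histogram(data)
    return "malicious" if distance(v, malicious_c) < distance(v, benign_c) else "benign"

print(classify(b"hello world " * 30))  # benign
```

Unlike a hash blacklist, such a model can flag a sample it has never seen, which is exactly why it still counts as "detection" rather than something categorically new.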

I could only find this:
"The results are based on the test set of 316 live test cases (malicious URLs found in the field), consisting of working exploits (i.e. drive-by downloads) and URLs pointing directly to malware. Thus exactly the same infection vectors are used as a typical user would experience in everyday life."
https://www.av-comparatives.org/wp-content/uploads/2017/11/avc_factsheet2017_10.pdf
So they say, rather, that this can include 0-hour and 0-day samples. But the result for Defender in the above test was 99.1% (not 96.3%), so maybe we are talking about different tests.
.
But please do not force me to defend Defender. I used BitDefender and Defender as examples. The true AI capabilities were not tested, and may have bugs (very possible for Defender). The main idea of the thread was that standard tests miss the AI capabilities related to post-infection signatures, and this was stated very clearly.
So, let's stop talking about Defender. There is no test proof that it is a very good AV, and there is no proof that it cannot be a very good AV. I would be cautious about believing that it is one of the best AVs.

AVAST, ESET, Bitdefender, and others have been using AI/machine learning in their backends for decades already. So I do not know what anyone is expecting in terms of "true AI capabilities." What you see now is what you get.

Anyone with deep pockets and the willingness could pay any of the labs to collect samples every day, confirm those samples are new, and test those new samples against all the AVs every day. That would show that actual detection rates for "zero-day" malware samples for most AVs are much lower than in the monthly and quarterly reports being released by the very same labs today.

But anyway, the labs use samples that are "representative" of what a user "might" encounter in their real-world computing travels.

The bottom line is that the layperson does not understand AV lab tests.

Average Joe just sees the graph and percentages the lab reports, or some bars with circles - without one iota of understanding of the myriad factors and issues, corner cases, exceptions, variables, things that might influence results, errors, test methodology, how the individual AVs work, etc., that are all reported or reflected in the test results.

Does anyone really believe that average Joe can read an AV-Comparatives test report - read the supplemental notes and explanation of test methodology - and after doing so really understand everything behind those pretty interactive green/yellow/red bar graphs and percentages? Or that average Joe can knowledgeably jump into an online discussion regarding bias introduced by geographic sampling, or "what does this mean in their report," or what have you?

Hence, you have "What is best AV" debates by the forum mob.
 
Last edited by a moderator:

Andy Ful

From Hard_Configurator Tools
Thread author
Verified
Honorary Member
Top Poster
Developer
Well-known
Dec 23, 2014
8,593
AVAST, ESET, Bitdefender, and others have been using AI/machine learning in their backends for decades already. So I do not know what anyone is expecting in terms of "true AI capabilities." What you see now is what you get.
Avast, Eset, and many others do not use AI to create real-time post-infection signatures (as far as I know). They train AI on 'big data' (a large base of malware samples), and can use AI to make pre-execution and run-time malware signatures.
Defender, BitDefender (partially), and probably Kaspersky (KSN) can use AI to create pre-execution, run-time, and post-infection malware signatures. I used the phrase 'true AI capabilities' to underline the fact that, in the standard tests, only pre-execution and run-time protection can be measured properly. Not all AI protective capabilities (like post-infection protection) can be properly measured in the standard tests.
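The three detection stages named above, and why standard tests only see two of them, can be spelled out in a tiny sketch. The stage names and the `counted_in_standard_test` helper are illustrative labels for the argument in this thread, not any vendor's terminology.

```python
from enum import Enum

class Stage(Enum):
    PRE_EXECUTION = "file scanned on disk, before it runs"
    RUN_TIME = "behavior monitored while the file runs"
    POST_INFECTION = "flagged by cloud analytics after it has already run"

def counted_in_standard_test(stage: Stage) -> bool:
    """A standard test records hit/miss while the sample runs, so a signature
    the cloud generates minutes or hours later arrives after the test has
    already scored a miss - even though a real user would still be remediated."""
    return stage in (Stage.PRE_EXECUTION, Stage.RUN_TIME)

assert counted_in_standard_test(Stage.POST_INFECTION) is False
```

This is the measurement gap the thread is about: post-infection protection exists, but the scoring window of a standard test closes before it can act.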
.
Anyone who has deep pockets, and a willingness to do so, can pay any of the labs to collect samples every day, confirm those samples are new, and test those new samples with all the AVs every day, that will show that actual detection rates for "zero-day" malware samples for most AVs is much lower than the monthly and quarterly reports being released by the very same labs today.
That is true, but any result obtained will be incorrect - like measuring people's height from the ground to the head while ignoring that some of them can jump a little. Of course, if the maximum jump is one millimeter, the test can give a good height estimation. But in fact, we do not know how high the people will jump in the test, because no one bothered to check.
Simply put, any result of a standard 0-day test can be incorrect for AVs which create post-infection signatures on the fly.

The bottom line is that the layperson does not understand AV lab tests.

Average Joe just sees the graph and percentages the lab reports, or some bars with circles - without one iota of understanding of the myriad factors and issues, corner cases, exceptions, variables, things that might influence results, errors, test methodology, how the individual AVs work, etc., that are all reported or reflected in the test results.

Does anyone really believe that average Joe can read an AV-Comparatives test report - read the supplemental notes and explanation of test methodology - and after doing so really understand everything behind those pretty interactive green/yellow/red bar graphs and percentages? Or that average Joe can knowledgeably jump into an online discussion regarding bias introduced by geographic sampling, or "what does this mean in their report," or what have you?

Hence, you have "What is best AV" debates by the forum mob.
Yes, and also this thread is not for the average Joe.:censored:
 
Last edited by a moderator:

Windows_Security

Level 24
Verified
Honorary Member
Top Poster
Content Creator
Well-known
Mar 13, 2016
1,298
But anyway, the labs use samples that are "representative" of what a user "might" encounter in their real-world computing travels.

What is the informative value of consumer tests in non-real-world conditions? Every car vendor publishes fuel consumption figures. Every consumer knows these values will never be met in real-life (commuter driving) conditions. Every smartphone vendor publishes battery life. Every consumer knows that you will have to charge your phone earlier. What is the rationale of your criticism against real-world test conditions? In what parallel theoretical world are you planning to use an antivirus?

An antivirus is a blacklist solution. Maybe a smart blacklist solution, through client-based behavioral analysis and cloud-based machine learning/artificial intelligence, but in essence an antivirus is a blacklist. A blacklist solution will never be able to recognize malware that is not on the list. It is as simple as that. Fish don't have wings, birds don't have gills.
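The core weakness of a pure blacklist is easy to demonstrate: a hash lookup detects only the exact bytes it was built from, and flipping a single byte yields a completely different hash. A minimal sketch (the sample bytes here are made up for illustration):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

blacklist: set[str] = set()

sample = b"malicious payload v1"
blacklist.add(sha256(sample))        # vendor adds the known sample's hash

print(sha256(sample) in blacklist)   # True: the exact sample is detected

# The attacker changes a single byte; the hash changes completely,
# the blacklist no longer matches, and detection fails.
mutated = sample[:-1] + b"2"
print(sha256(mutated) in blacklist)  # False
```

This is why vendors moved beyond exact-hash blacklists toward generic signatures and ML-based heuristics, which tolerate small byte-level changes - though, as argued above, those are still blacklist-style detection at heart.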
 
Last edited:
