Andy Ful
Let's consider an example: an initial pool of 30000 sufficiently different malware variants in the wild, and a particular AV that failed to detect 100 of them.
Next, we run a trial: choose 380 samples at random from these 30000 and calculate the probabilities of finding 0, 1, 2, or 3 undetected malware samples among them.
m = 30000 (malware variants in the wild)
n = 380 (samples chosen for the test)
k = 100 (variants the AV fails to detect)
The probability of finding x = 0, 1, 2, 3, ... undetected malware samples is given by the hypergeometric distribution:
p(x) = B(m-k, n-x) * B(k, x) / B(m, n)
where B(p, q) is the binomial coefficient.
Expanding the binomial coefficients gives:
p(x) = (m-k)! * k! * (m-n)! * n! / [x! * (k-x)! * (n-x)! * (m-k-n+x)! * m!]
Here are the results for x = 0, 1, 2, and 3:
p(0)=0.28
p(1)=0.36
p(2)=0.23
p(3)=0.10
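These values can be reproduced with a few lines of Python (a minimal sketch of the formula above; math.comb requires Python 3.8+):

```python
from math import comb

m, n, k = 30000, 380, 100  # pool size, test size, undetected variants

def p(x: int) -> float:
    # B(k, x) ways to pick the undetected samples, B(m-k, n-x) ways to
    # pick the detected ones, over B(m, n) ways to pick any n out of m.
    return comb(m - k, n - x) * comb(k, x) / comb(m, n)

for x in range(4):
    print(f"p({x}) = {p(x):.2f}")  # prints 0.28, 0.36, 0.23, 0.10
```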
These probabilities show that a single AV can score different numbers of undetected malware samples (0, 1, 2, 3, ...) purely by chance when a smaller pool of samples is preselected from the much larger set.
We can compare these probabilities with the results of the AV-Comparatives Real-World Protection Test for July-August 2020 (factsheet at www.av-comparatives.org):
4 AVs with 0 undetected malware
5 AVs with 1 undetected malware
3 AVs with 2 undetected malware
1.5 AVs with 3 undetected malware (I added 0.5 AV for Norton)
We can calculate the ratios of these probabilities and compare them with the ratios of the numbers of AVs for particular numbers of undetected malware:
p(0)/p(1) = 0.77 ~ 4 AVs/5 AVs
p(0)/p(2) = 1.22 ~ 4 AVs/3 AVs
p(1)/p(2) = 1.57 ~ 5 AVs/3 AVs
p(0)/p(3) = 2.8 ~ 4 AVs/1.5 AVs
p(1)/p(3) = 3.6 ~ 5 AVs/1.5 AVs
p(2)/p(3) = 2.3 ~ 3 AVs/1.5 AVs
etc.
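Another way to see the agreement (a rough sketch, using the 13.5-AV total counted above): multiplying each probability by the total number of AVs gives the expected count of AVs with x undetected samples, if all of them shared the same real detection rate:

```python
total_avs = 4 + 5 + 3 + 1.5           # 13.5 AVs, as counted above
probs = {0: 0.28, 1: 0.36, 2: 0.23, 3: 0.10}
observed = {0: 4, 1: 5, 2: 3, 3: 1.5}

for x in range(4):
    # Expected: 3.8, 4.9, 3.1, 1.4 -- close to the observed 4, 5, 3, 1.5.
    print(f"x={x}: expected {total_avs * probs[x]:.1f} AVs,"
          f" observed {observed[x]}")
```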
As we can see, the AV-Comparatives test results for AVs with 0, 1, 2, or 3 undetected malware samples are very close to the results of random trials for one particular AV.
This means that F-Secure, G-Data, Panda, TrendMicro, Avast, AVG, BitDefender, Avira, Eset, K7, Microsoft, and Norton could in fact all have the same real number of undetected malware (100 out of 30000), and yet score different numbers of undetected samples in the July-August test by pure statistics.
Is the assumption of 30000 sufficiently different malware variants in the wild over two months reliable? Yes, it is. In the first half of 2019, SonicWall's Real-Time Deep Memory Inspection (RTDMI) technology unveiled 74,360 ‘never-before-seen’ malware variants (about 25000 per two months).
Is the assumption of 100 undetected malware samples out of 30000 reliable? Yes, it is.
This miss rate gives on average (100 / 30000) * 380 ≈ 1.27, so about 1 undetected malware sample per 380-sample test.
Conclusion.
One test with 380 malware samples is not especially reliable for a period of two months.
Even if the real malware detection rate is the same for two AVs, one can easily score 0 undetected malware samples while the other scores 2.
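We can check this conclusion with a short Monte Carlo simulation (a sketch under the same m, n, k assumptions, treating the two AVs' undetected sets as independent; the trial count is arbitrary):

```python
import random

m, n, k = 30000, 380, 100  # pool size, test size, truly undetected variants
TRIALS = 20_000

def misses() -> int:
    # Variants 0..k-1 are the ones this AV fails to detect; draw a random
    # 380-sample test set and count how many of them land in it.
    return sum(1 for s in random.sample(range(m), n) if s < k)

# Two AVs with the identical real miss rate, scored on independent draws.
differ = sum(abs(misses() - misses()) >= 2 for _ in range(TRIALS)) / TRIALS
print(f"P(scores differ by 2 or more) ~ {differ:.2f}")  # roughly 0.3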
Edit.
About the impact of a greater number of ‘never-before-seen’ malware variants on these calculations, see the follow-up post:
AV-Comparatives - Randomness in the AV Labs testing. (malwaretips.com)