Hot Take: SiriusLLM by VoodooSoft / CyberLock

Earlier today we received notice that the Model 2 we use in SiriusLLM is going to be deprecated soon, so we replaced it with the new version. I have not tested it extensively yet, but I ran a few tricky tests, and so far I am quite impressed. It is not quite as aggressive as the old Model 2, but from what I have seen so far, it is just about spot on.

Here is the latest version with the new Model 2.

SiriusLLM 0.62
SHA-256: b6eef46fed2a90e0e824da44994fe833c30ee061f5cdec1248e6980eefec14e8
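If you want to confirm your download matches the hash above before installing, a quick stdlib-only check works; note that the installer filename below is a placeholder, not the actual download name:

```python
import hashlib

EXPECTED = "b6eef46fed2a90e0e824da44994fe833c30ee061f5cdec1248e6980eefec14e8"

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so a large installer never sits in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (the filename here is a guess, substitute whatever the installer is called):
#   sha256_of("SiriusLLM-0.62-setup.exe") == EXPECTED
```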
 
Earlier today we received notice that the Model 2 we use in SiriusLLM is going to be deprecated soon, so we replaced it with the new version. I have not tested it extensively yet, but I ran a few tricky tests, and so far I am quite impressed. It is not quite as aggressive as the old Model 2, but from what I have seen so far, it is just about spot on.

Here is the latest version with the new Model 2.

SiriusLLM 0.62
SHA-256: b6eef46fed2a90e0e824da44994fe833c30ee061f5cdec1248e6980eefec14e8
Installed and running, no issues :D
 
I'm running 0.62 now. The initial snapshot was A-OK, but I had an issue with the manual right-click Windows context scan. I did some tweaks and now it seems to be working. I haven't tried to scan a portable file yet, but will soon...
 
I understand your point of view. If I posted saying that I had an executable file located on the desktop after a scan with Sirius LLM and it detected it as unsafe, why would I make all that up? :rolleyes:
I certainly didn't mean to imply that you made up the result with the file on your desktop. If I gave that impression, I sincerely apologize. I just wanted to point out that there are contradictions between your experience with Sirius LLM, the program's design, and Dan's answer that confuse me.
 
Running 0.62 successfully here. I was having an issue with right-click Windows context scans, especially of portable files; I think the problem was a leftover registry key from an earlier version of Sirius, but everything seems correctly updated now with 0.62. It did context-scan an unsigned file as 70% confident malicious that VT rated clean 0/72, and I advised danb via the GUI interface and by email, and he checked it. (It may depend on your definition of malicious vs. PUA, and I did remove that app from this VM.)
 
I certainly didn't mean to imply that you made up the result with the file on your desktop. If I gave that impression, I sincerely apologize.
Yes, I understand. It was nothing. (y) I think I must have gotten carried away when I ran the test with Sirius, that must be it. :)
I just wanted to point out that there are contradictions between your experience with Sirius LLM, the program's design, and Dan's answer that confuse me.
Exactly, that's why I posted it. If the malware sample was running and Sirius detected it, that would be nothing new, and in that case I wouldn't have made post #61. @danb is correct in his statements; Sirius analyzes and detects only files or running processes. I tried to replicate the procedure I had done previously with a new sample to record a GIF or video, but without success; Sirius did not detect the sample. So danb's statement prevails: only running processes are analyzed by Sirius LLM. ;)
 
FYI, here are the initial SiriusLLM baseline results, -100 to 100 (Not Safe to Safe).

Model 0 is the Recommended Model, Model 1 is the Aggressive Model that was updated on July 7th, and now it is not quite as aggressive.

The end result out of 1,231 samples...

Less than 10 false positives (mostly unsigned, non-prevalent files)

Less than 5 false negatives. (2 were me throwing tricky samples at it)

All other verdicts were correct.
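For perspective, taking the worst case those bounds allow (9 false positives and 4 false negatives out of 1,231 samples), the baseline accuracy works out to roughly 98.9%:

```python
total = 1231        # samples in the baseline run
errors = 9 + 4      # worst case implied by "less than 10" FP and "less than 5" FN
accuracy = (total - errors) / total
print(f"worst-case accuracy: {accuracy:.1%}")  # about 98.9%
```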

 
I am always on the lookout for truly random, well-researched samples, and I happened to stumble upon the following @Trident post, so I figured I would test: App Review - McAfee: how bad is the worst antivirus?

There were a couple of files that I could not track down, but the ones that I could all had the correct SiriusLLM verdict, along with all-correct WLC and VoodooAi verdicts. And obviously, when implemented into CyberLock or our other products, these will be detected pre-execution.

Hash | WLC Result | VoodooAi | SiriusLLM
9b757a3dbb96ff7cbea3853bdea20cbf954add2f6a2f6cebb2d0d5f0c137c0d8 | Not Safe | 98.17 | Model 0: -85, Model 1: -95
6981d8702172dc39f302bdeb4917c0eb49f7c37b2a90bee41f64ccecc7e9497d | Not Safe | 99.86 | Model 0: -60, Model 1: -92
968396ee196be287ac6de30d897f7e84570eb5a297642a32d7300826241349bb | Not Safe | 94.78 | Model 0: -85, Model 1: -92
b501e17e249221d34a618e288e0e9a75933cea9894ec11fdcd45c0663d95eeb6 | N/A | N/A | Model 0: -95, Model 1: -98
404f55e7aa854f7df700f2b93b4a31d0f13dde464e74985ca9bc98ba6224cc93 | Invalid file, will not execute, so analysis is skipped

At least one of the files I was unable to track down was a script. But as I was saying, SiriusLLM is truly approaching 100% efficacy when it comes to scripts / non-compiled, text-based files. It does great with compiled files as well, but it does especially well with non-compiled files. And there are several things we can do so that the compiled files test as well as the non-compiled files... I am trying to decide which of the 4-5 different techniques we should try first.

Anyway, here is why it would be difficult to bypass SiriusLLM with a script / non-compiled text based sample: https://www.cyberlock.global/downloads/ScriptResults.txt

BTW, if the script is so obfuscated that it cannot be read, SiriusLLM should mark the file as Not Safe as well. If anyone ever finds a malicious script / text file that can bypass SiriusLLM, please let me know!
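That "too obfuscated to read" case can be approximated even without an LLM. One common heuristic, not necessarily what SiriusLLM does, is Shannon entropy over the script text; the 5.2 cutoff below is purely illustrative:

```python
import base64
import math
import os
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits per character: plain scripts sit around 4-4.7, while packed or
    base64-heavy payloads push toward 6 (the maximum for 64 symbols)."""
    if not text:
        return 0.0
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in Counter(text).values())

def looks_obfuscated(script: str, threshold: float = 5.2) -> bool:
    # 5.2 is an illustrative cutoff, not anything SiriusLLM actually uses;
    # tune it against a labeled corpus.
    return shannon_entropy(script) > threshold

readable = 'Set sh = CreateObject("WScript.Shell"): sh.Run "notepad.exe"'
packed = base64.b64encode(os.urandom(1200)).decode()  # stand-in for a packed blob
print(looks_obfuscated(readable), looks_obfuscated(packed))
```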
 
Hi @danb

I wanted to get a fresh copy of SiriusLLM, so I logged onto your website, https://cyberlock.global/, but it is showing as down. I also tried https://voodooshield.com/; both show an "HTTP Error 503. The service is unavailable." message, which is backed up by the website Is It Down Right Now.

I also tried to do a version check with CyberLock, and it came back with the following message:
An internet connection was not detected.

Looks like your sites might be having a problem.
 
Hi @danb

I wanted to get a fresh copy of SiriusLLM, so I logged onto your website, CyberLock - Automated and Effortless Zero-Trust Endpoint Protection, but it is showing as down. I also tried CyberLock - Automated and Effortless Zero-Trust Endpoint Protection; both show an "HTTP Error 503. The service is unavailable." message, which is backed up by the website Is It Down Right Now.

I also tried to do a version check with CyberLock, and it came back with the following message:
An internet connection was not detected.

Looks like your sites might be having a problem.
Thank you, we had some pretty bad storms last night, but we are up and running now!
 
Where do I find the logs, if any? I got 3 warnings for the Plex TV desktop app... sorry, I immediately nuked the app before remembering the devs might be interested, so I didn't even get a screenshot. Here is the link to the installer...

As of now there is no logging. We will probably add some logging for bugs, but we will probably not log the full prompt / response... that would be a massive log.

But I downloaded and installed Plex from your link. The installer and every executable that was installed all tested at 95% Safe or above on Model 1 and I tested several on Model 2 and they were all 95% Safe or above as well. I am guessing that maybe there was an uncommon plugin or something that you are using with Plex that had a Not Safe verdict? I am not too familiar with Plex, but I think you can add plugins... this is just a guess. I wouldn't worry about alerting the Devs... everything tested Safe ;).
 
Hi @danb

I wanted to get a fresh copy of SiriusLLM, so I logged onto your website, CyberLock - Automated and Effortless Zero-Trust Endpoint Protection, but it is showing as down. I also tried CyberLock - Automated and Effortless Zero-Trust Endpoint Protection; both show an "HTTP Error 503. The service is unavailable." message, which is backed up by the website Is It Down Right Now.

I also tried to do a version check with CyberLock, and it came back with the following message:
An internet connection was not detected.

Looks like your sites might be having a problem.
We had bad storms last night, so I figured that was the cause of our outage. Then a few hours later our main server went down again. It turned out to be a DNS Cache Flooding attack, but it is fixed now. I also talked to the guys at the data center and they are going to block these attacks at the routers as well.

BTW, I have been working on SiriusGPT, which is basically SiriusLLM but with the kernel-mode driver; it will function like CyberLock on AutoPilot. It will be a really great, super lightweight addition for anyone who runs a traditional AV but does not want to go full zero-trust. It should be ready in 2-3 days, then Shadowra is going to test it, then we will post it. Thank you guys!
 
As of now there is no logging. We will probably add some logging for bugs, but we will probably not log the full prompt / response... that would be a massive log.

But I downloaded and installed Plex from your link. The installer and every executable that was installed all tested at 95% Safe or above on Model 1 and I tested several on Model 2 and they were all 95% Safe or above as well. I am guessing that maybe there was an uncommon plugin or something that you are using with Plex that had a Not Safe verdict? I am not too familiar with Plex, but I think you can add plugins... this is just a guess. I wouldn't worry about alerting the Devs... everything tested Safe ;).
Maybe it's because I extracted the installer to a portable folder instead of doing a normal install? Anyway, thanks for your reply.
 
I saw an odd interaction (to me) between Sirius 0.62 and WLC (CL 8.02). I had an astronomy app open to check some data, and Sirius was doing 1-hour snapshot scans. In my WLC this app is whitelisted, yet during its scan Sirius marked it malicious, because Sirius looked at WLC, which called it malicious even though it is whitelisted in WLC. So I had Sirius re-analyze the app, and it correctly reported "safe" at 85%, mentioning that it is an older app and has all the earmarks of a legit app. BUT then, after its re-analysis, Sirius did its hourly snapshot scan of running apps and again reported malicious because of WLC. So this time I whitelisted the app in the Sirius dropdown. Some of the cross-talk between Sirius and WLC seems out of sync (to me). @danb, if you need the astro app in question, I'll email you the info. This activity was on the Win10 hardware machine.
 
And I saw a second Sirius flip-flop decision at 60% confidence. Sirius did an automatic snapshot scan and rated a different running app (not the astronomy app) as potentially malicious at 60%. The app is unsigned, and I have it whitelisted in WLC. So I ran a manual re-analyze of the exe with the app closed, and Sirius rated it safe at 60%. I then re-opened the app and ran a snapshot scan expecting to see that it is now safe, but Sirius again reported it potentially malicious at 60% confidence. Sirius settings are default.
 
Integrating large language models (which I am assuming is the OpenAI API or the Gemini Flash API, both of which accept custom tuning) is a good idea, but more important than the API is the feature set. In post 3 I see the LLM considering quite a few factors, but a high-quality static analysis engine usually extracts several thousand features. Nevertheless, it's an interesting project.

Also, I am assuming @danb that you’ve tuned the API with a large number of high-quality pre-checked and pre-labelled samples (ideally 1:1 malicious/safe)?

I also suggest you avoid personalities and humorous answers.
I know that it adds a certain signature that I like myself, but security is not the place for giggles and laughter.

Displaying detailed reasons why files are flagged (way too detailed) in a program that offers trial versions downloadable with 2 clicks is also a recipe for disaster; it's yet another way to tell attackers how to evade detection.
Maybe do it only in the enterprise version (requiring an enterprise email), and display vague, generic information in the home version.

Last but not least, since you are making use of APIs on your way to a full-blown AV (I don't blame you, I am using these APIs too, more and more), why don't you make them return a detection name too?

Trojan.Generic_MalScript
Trojan.Generic_MalPE
Or maybe Malicious:Confidence=“80%”

And so on. Give it that little AV touch and feel.
You can do this on the frontend side as well, based on the API response.
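A sketch of what that frontend mapping could look like; the field names (`verdict`, `confidence`, `file_type`) are invented here, since the real API response format isn't published:

```python
def detection_name(verdict: str, confidence: int, file_type: str) -> str:
    """Build an AV-style detection name from a hypothetical API response.
    file_type: 'pe' for compiled binaries, 'script' for text-based files."""
    if verdict != "malicious":
        return ""
    family = "Trojan.Generic_MalPE" if file_type == "pe" else "Trojan.Generic_MalScript"
    return f"{family}:Confidence={confidence}%"

print(detection_name("malicious", 80, "script"))
# Trojan.Generic_MalScript:Confidence=80%
```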
 
And I saw a second Sirius flip-flop decision at 60% confidence. Sirius did an automatic snapshot scan and rated a different running app (not the astronomy app) as potentially malicious at 60%. The app is unsigned, and I have it whitelisted in WLC. So I ran a manual re-analyze of the exe with the app closed, and Sirius rated it safe at 60%. I then re-opened the app and ran a snapshot scan expecting to see that it is now safe, but Sirius again reported it potentially malicious at 60% confidence. Sirius settings are default.
We talked through email, but so everyone can see, here is my response.

Yeah, I am still playing around with the ranges on the results / verdicts, mainly when the confidence is 60% or less. The main reason is that when I was testing some malware, I noticed that when the verdict dropped into the 60% confidence range, it could be wrong. So out of an abundance of caution, some of the 60% Safe confidence verdicts are changed to 60% Not Safe (depending on other factors like the WLC result, valid signature, VoodooAi, etc.). I have implemented this in the manual scan, but not yet in the Snapshot Scan. So yeah, this is expected, and I am still playing with the code to get it just right.

In short, 60% Safe confidence is not acceptable... I mean, would you execute arbitrary code on your machine if you were only 60% sure it was safe? I wouldn't ;). So as a quick fix, if the verdict is 60% or less, we call it Not Safe. In the SiriusGPT version that is almost ready, we have a better / more permanent fix: if the verdict is 60% or less confident, we automatically scan the file with the second model and update the verdict based on its result.
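To make that two-stage logic concrete, here is a sketch with stubbed model calls; the real verdict also weighs factors like WLC, the signature, and VoodooAi, which are omitted here:

```python
def final_verdict(scan, path, threshold=60):
    """scan(path, model) -> (verdict, confidence). Models 0 and 1 stand for the
    two SiriusLLM models described in the post above; the scan function here is
    a stub, not the real engine."""
    verdict, conf = scan(path, model=0)
    if conf > threshold:
        return verdict, conf
    # Low confidence: out of caution, consult the other model and let it decide.
    return scan(path, model=1)

def fake_scan(path, model):
    # Stub: model 0 is unsure, model 1 is confident the sample is bad.
    return ("Safe", 60) if model == 0 else ("Not Safe", 92)

print(final_verdict(fake_scan, "sample.exe"))
# ('Not Safe', 92)
```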
 
Integrating large language models (which I am assuming is the OpenAI API or the Gemini Flash API, both of which accept custom tuning) is a good idea, but more important than the API is the feature set. In post 3 I see the LLM considering quite a few factors, but a high-quality static analysis engine usually extracts several thousand features. Nevertheless, it's an interesting project.

Also, I am assuming @danb that you’ve tuned the API with a large number of high-quality pre-checked and pre-labelled samples (ideally 1:1 malicious/safe)?

I also suggest you avoid personalities and humorous answers.
I know that it adds a certain signature that I like myself, but security is not the place for giggles and laughter.

Displaying detailed reasons why files are flagged (way too detailed) in a program that offers trial versions downloadable with 2 clicks is also a recipe for disaster; it's yet another way to tell attackers how to evade detection.
Maybe do it only in the enterprise version (requiring an enterprise email), and display vague, generic information in the home version.

Last but not least, since you are making use of APIs on your way to a full-blown AV (I don't blame you, I am using these APIs too, more and more), why don't you make them return a detection name too?

Trojan.Generic_MalScript
Trojan.Generic_MalPE
Or maybe Malicious:Confidence=“80%”

And so on. Give it that little AV touch and feel.
You can do this on the frontend side as well, based on the API response.
I love the idea of assigning a detection name; thank you for the suggestion, I will implement that ASAP. I need to be careful what I say, because there is some pretty unique stuff going on with SiriusGPT, mainly on the backend.

But yes, Sirius utilizes mainly static analysis, with a touch of what I call "lite dynamic analysis". For years I have cautioned against full sandbox analysis (even when we had Cuckoo Sandbox as an option). Whenever I test our products with ANY / ALL of the sandboxes, the report lists tons of indicators that simply do not exist in our software. For example, I saw one recently where Windows Update ran during the sandbox analysis, and it somehow thought our software triggered Windows Update, and it called this a malicious indicator. Any developer who analyzes their software with any of the sandboxes will agree... they make tons of mistakes and are not nearly as accurate as people make them out to be. And it would be super easy to include anti-sandboxing methods in malware that would easily trick the sandbox (yes, without them accurately detecting these methods). Not to mention, why even sandbox the file if you are going to execute the file unsandboxed at some point anyway? But more importantly, every full sandbox analysis that I have seen is riddled with inaccuracies and false indicators.

I have been pondering Sirius for 2 or so years now, ever since ChatGPT was first released, weighing the benefits of static vs. dynamic analysis. I mentioned earlier that Sirius is like a malware researcher / analyst that sits on your shoulder and helps figure out whether files are safe. I myself am not a great malware analyst, but I can take a quick look at most files and tell you very quickly if a file is safe, without even executing it. And expert malware analysts are much better than I am at quickly determining whether a file is safe. Obviously there are edge cases that are very tricky, but for most files it is quite easy. This is one of the things I have been considering for the last 2 years: how do I create an LLM that can perform malware analysis like an expert malware analyst? It dawned on me that just statically examining the strings, imports, exports, signature, etc., and analyzing these features with an LLM, would get us to 99% of where we need to be. It took quite a bit of tweaking, and I have to say I was lucky and guessed correctly at several critical stages of development, but once I saw the end result, I was truly astonished. And as I was saying... this is baseline, and it is only going to get better from here.

So yes, we have some lite dynamic analysis, but a lot of the verdict is derived from static analysis, for the reasons I explained above.
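As a rough illustration of that static pipeline (strings and imports in, LLM verdict prompt out), here is a stdlib-only sketch; the prompt wording and feature set are invented for illustration, not SiriusLLM's actual prompt:

```python
import re

def extract_strings(data: bytes, min_len: int = 6):
    """Pull printable ASCII runs out of a binary, like the `strings` tool."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

def build_prompt(strings, imports, signed: bool) -> str:
    # Hypothetical prompt; the real one is deliberately not published.
    return (
        "You are a malware analyst. Based only on these static features, "
        "rate the file from -100 (Not Safe) to 100 (Safe).\n"
        f"Signed: {signed}\n"
        f"Imports: {', '.join(imports)}\n"
        f"Notable strings: {'; '.join(strings[:50])}\n"
    )

# Toy "binary": a few non-printable bytes around one suspicious URL string.
data = b"\x00\x01MZ\x90" + b"http://evil.example/payload.bin\x00" + b"\xff" * 8
prompt = build_prompt(extract_strings(data), ["kernel32.dll!WinExec"], signed=False)
print(prompt)
```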

You mentioned "Displaying detailed reasons why files are flagged (way too detailed) to a program that offers trial versions downloaded with 2 clicks, is also a recipe for disaster, it’s yet another way to tell attackers how to evade detection."... and that is a valid point. But in all fairness, attackers have access to the same tools and exploit databases that we do, and they are already utilizing LLM's to create malware. Ultimately, it is going to be the battle of the AIs. But just keep in mind, that when an attacker reads a SiriusGPT prompt and adjusts their code accordingly, it severely limits what they can do when developing malware ;). All of a sudden they have created a user-friendly app that is perfectly safe and useful ;).

Thanks again for the suggestion, I think it will be super cool!
 
