Question: Is this legal?


The-Unknown-Sal

Level 1
Thread author
Aug 9, 2025
Hey all,


I’ve been diving deep into the Roblox platform’s kernel components, especially the RobloxPlayerBeta.dll driver, and uncovered some seriously troubling stuff. Need the community’s take on this.


What I found:


  • Behaves like a stealth kernel-mode rootkit with runtime self-decryption, AVX/SIMD memory tricks, and dynamic PE remapping.
  • Abuses DigiCert trusted certificates to bypass Windows kernel-mode signing enforcement.
  • Uses advanced anti-debugging and anti-VM entropy checks to dodge analysis (a rough sketch of what such checks can look like follows right after this list).
  • Attempts to overwrite critical kernel memory, likely triggering PatchGuard violations that can brick Windows installs.
  • Collects aggressive telemetry targeting hardware IDs, user activity, and debugger presence without clear user consent.
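To make that anti-debugging/anti-VM point concrete, here is a rough, hypothetical user-mode sketch of the kind of checks I mean. This is illustrative C++ I wrote for this post, not code lifted from the binary, and the timing threshold is arbitrary:

Code:
// Hypothetical illustration of common anti-debugging checks (not Roblox's code).
#include <windows.h>
#include <intrin.h>
#include <cstdio>

bool LooksDebugged()
{
    // Direct check of the PEB BeingDebugged flag via the documented API.
    if (IsDebuggerPresent())
        return true;

    // Ask the kernel whether a debugger is attached to this process.
    BOOL remote = FALSE;
    if (CheckRemoteDebuggerPresent(GetCurrentProcess(), &remote) && remote)
        return true;

    // Crude timing check: single-stepping inflates the cycle delta between
    // two rdtsc reads far beyond normal execution time.
    unsigned __int64 t0 = __rdtsc();
    volatile int sink = 0;
    for (int i = 0; i < 1000; ++i) sink += i;
    unsigned __int64 t1 = __rdtsc();
    return (t1 - t0) > 10000000; // arbitrary threshold, purely illustrative
}

int main()
{
    std::printf("debugger suspected: %d\n", LooksDebugged() ? 1 : 0);
}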

This goes well beyond typical anti-cheat software and risks system stability and user privacy.


My question:
Is deploying a driver with stealth, kernel memory manipulation, and aggressive telemetry legal under current laws? Especially when it runs silently, without explicit user consent, and can potentially break the OS?


I’m trying to understand if this crosses legal boundaries or if such aggressive tactics are “normal” in the industry.


Any insights, legal references, or similar experiences would be hugely appreciated.


Thanks!


PS: The driver calls WriteFile on critical system DLLs like kernel32.dll, writing zeroed data — behavior that looks like deliberate corruption designed to trip PatchGuard and cause crashes. This isn’t just shady; it’s borderline destructive malware, not your typical anti-cheat.
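For context on why that zero-write claim matters: here is a tiny, purely illustrative user-mode sketch (my own code, not the driver's) showing that a normal process is refused write access to kernel32.dll, since the file is owned by TrustedInstaller and is in use. Anything that can actually overwrite it is operating with far more than ordinary privileges:

Code:
// Illustrative only: a normal process cannot even open
// C:\Windows\System32\kernel32.dll for writing (expected: ERROR_ACCESS_DENIED
// or ERROR_SHARING_VIOLATION). This is NOT the driver's code.
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE h = CreateFileW(L"C:\\Windows\\System32\\kernel32.dll",
                           GENERIC_WRITE,          // ask for write access
                           0, nullptr, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE) {
        std::printf("write-open refused, error %lu\n", GetLastError());
        return 1;
    }
    std::printf("write-open succeeded (unexpected on a default install)\n");
    CloseHandle(h);
    return 0;
}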
Also, if you want more details, here is a link to the GitHub Gist.

Made by me
 
There is no universal "yes" or "no" answer. While most EULAs are written to give companies broad permissions, these permissions can be challenged under various privacy and computer access laws. A court case would likely come down to the specific actions of the DLL, the clarity of the EULA, and the privacy laws of the jurisdiction in question.
 
Hey, appreciate the thoughtful response! Just to clarify: no ToS or EULA can outright grant a company a free pass to break laws. They can try to cover their backs with broad language, but legal boundaries like privacy laws, anti-tampering statutes, and computer misuse laws still apply regardless of what the ToS says.

In other words, agreeing to a ToS doesn’t mean users waive their rights or give permission for illegal actions like unauthorized kernel-level manipulation, data theft, or causing system instability. Courts can and do strike down terms that violate fundamental laws or public policy. So, while ToS might provide some cover, it’s definitely not a legal “get out of jail free” card.

Hope that clears things up!
 
You are referring to Roblox’s Hyperion Driver which was developed by a company called Byfron.

It appears to be either a user-mode driver developed under the UMDF, or it could be providing some COM interfaces to a kernel-mode driver, which is outside the scope of your analysis.

The operations and behaviour are normal in the sense of anti-cheat systems. If you analyse other anti-cheat systems such as PUBG, Easy Anti-Cheat, Vanguard and so on, you will find similar behaviour.

Is this legal? Yes; unless the driver intentionally causes damage or the collected information is misused in some way, there is nothing illegal.

Before Roblox releases this driver and puts their digital signature on it, they’ve most likely consulted legal teams about what they can and can’t do.
The EULA likely contains information about this behaviour and by proceeding to installation/usage of the platform, users consent to it.
 
Thanks for your reply,

So look, most kernel anti-cheats do hook into the kernel at the deepest level possible, but they don’t go as far as to trip PatchGuard. If they detect a VM, most of them won’t even run in the first place.
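For reference, the simplest VM check an anti-cheat can run is a single CPUID query. A minimal, hypothetical version (my own MSVC-style sketch, not taken from the Roblox binary) looks like this:

Code:
// Hypothetical minimal VM check: CPUID leaf 1 sets ECX bit 31
// ("hypervisor present") under most hypervisors. Illustrative only.
#include <intrin.h>
#include <cstdio>

bool HypervisorPresent()
{
    int regs[4] = { 0 };        // EAX, EBX, ECX, EDX
    __cpuid(regs, 1);
    return (regs[2] & (1u << 31)) != 0;
}

int main()
{
    std::printf("hypervisor bit set: %d\n", HypervisorPresent() ? 1 : 0);
}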


But this one does something reckless: it writes zeros to kernel32.dll to trip PatchGuard, instead of just not running, which wastes system resources and can cause system instability.

Their TOS or EULA does not state that they will do such a thing, which is why I’m asking if this is legal in any way.

From what I know, this seems not only like spyware because it scans your kernel for virtual drivers, but also like a kind of rootkit since it messes with kernel drivers or processes. It even uses a function called NtTerminateProcess to end security processes like Windows Defender.

Also, it was heavily obfuscated and encrypted. It took me time to decompile and see what was inside.
 
Is this legal? Yes; unless the driver intentionally causes damage or the collected information is misused in some way, there is nothing illegal.

Before Roblox releases this driver and puts their digital signature on it, they’ve most likely consulted legal teams about what they can and can’t do.
The EULA likely contains information about this behaviour and by proceeding to installation/usage of the platform, users consent to it.
These are part of a legal defense strategy, not an absolute guarantee of legality. The final say on what is legal always rests with the courts and the specific laws of the jurisdiction.
 
True, but most individuals are not willing to assume the legal costs - a universally formidable deterrent - to pursue a case for a "legal" or "not legal" determination.

When working with attorneys, if they say something is legal then one has to assume that it is - assuming that the attorneys are professionally competent. That's the entire point of having a difficult education, practice, and licensure scheme to be admitted to the Bar.

Nevertheless all entities are risk gamblers to one extent or another. So, sure, there can be illegal stuff with a EULA meant to mitigate the risks. However, it is globally established that if software does not inflict an actual harm, then it is legal. The central matter is what is defined as a "harm" from jurisdiction to jurisdiction.

Whether or not the "harm" remedy is governed by regulatory, civil, or criminal law is based upon a number of variables and jurisdiction.
 
So look, most kernel anti-cheats do hook into the kernel at the deepest level possible, but they don’t go as far as to trip PatchGuard. If they detect a VM, most of them won’t even run in the first place.
The VM protections are usually part of a broader, anti-reverse-engineering tactic. An anti-cheat system is not useful if it can be reverse-engineered and then the cheat can go around it.
Roblox will not discuss every teeny-tiny technical detail in the EULA, if someone is curious they can check the patents, where even the smallest mathematical formula will be detailed.
How the system operates is not information for the general masses.
But this one does something reckless: it writes zeros to kernel32.dll to trip PatchGuard, instead of just not running, which wastes system resources and can cause system instability.
I am not sure what Roblox's problem with PatchGuard is; Roblox can answer the question better. When the system was engineered and tested, something came up that required this to be done. If users are concerned about it, they can uninstall Roblox.
From what I know, this seems not only like spyware because it scans your kernel for virtual drivers, but also like a kind of rootkit since it messes with kernel drivers or processes. It even uses a function called NtTerminateProcess to end security processes like Windows Defender.
The definition of a rootkit is malware that enters kernel mode to bypass a full stack of defences. Roblox is not involved in the distribution of malware, hence the conditions for calling this a rootkit are not met. As for NtTerminateProcess, it can be used for anything and everything; here it can be used to terminate any processes that are related to cheats. Furthermore, NtTerminateProcess (not ZwTerminateProcess) is highly unlikely to kill security tools. In essence, the sandbox is alerting on behaviour that could be malicious in another context, but not in the context of this anti-cheat system.

Sandboxes are not universal "pills" in malware analysis, hence they are not deployed in the same manner when software has to "pull the plug" on files.

They require deep knowledge and understanding how software works. Be it malicious or safe.
 
True, but most individuals are not willing to assume the legal costs - a universally formidable deterrent - to pursue a case for a "legal" or "not legal" determination.

When working with attorneys, if they say something is legal then one has to assume that it is - assuming that the attorneys are professionally competent. That's the entire point of having a difficult education, practice, and licensure scheme to be admitted to the Bar.

Nevertheless all entities are risk gamblers to one extent or another. So, sure, there can be illegal stuff with a EULA meant to mitigate the risks. However, it is globally established that if software does not inflict an actual harm, then it is legal. The central matter is what is defined as a "harm" from jurisdiction to jurisdiction.

Whether or not the "harm" remedy is governed by regulatory, civil, or criminal law is based upon a number of variables and jurisdiction.
That's a very insightful point about the practical realities of the legal system. You're absolutely right that the high cost of litigation is a major deterrent for individuals, and companies rely on the professional judgment of their competent legal teams to navigate these risks.

However, it's crucial to remember that a lawyer's opinion is a professional assessment of risk, not an absolute guarantee of legality. The ultimate "legal" or "not legal" determination rests with a court, which can find certain clauses in an End-User License Agreement (EULA) to be unenforceable, even if a legal team drafted them. Furthermore, the definition of "harm" is much broader in a legal context than it might seem. It can include violations of privacy rights, breaches of contract, or non-compliance with specific regulations—all of which can be considered legal "harm" even without any obvious physical damage or financial loss to the user.

So, while your points about the practical side of law are spot on, the underlying legal truth is that a company's actions aren't de facto legal just because their lawyers say so; they're simply operating within a framework of calculated risk.
 
The VM protections are usually part of a broader, anti-reverse-engineering tactic. An anti-cheat system is not useful if it can be reverse-engineered and then the cheat can go around it.
Roblox will not discuss every teeny-tiny technical detail in the EULA, if someone is curious they can check the patents, where even the smallest mathematical formula will be detailed.
How the system operates is not information for the general masses.

I am not sure what Roblox's problem with PatchGuard is; Roblox can answer the question better. When the system was engineered and tested, something came up that required this to be done. If users are concerned about it, they can uninstall Roblox.

The definition of a rootkit is malware that enters kernel mode to bypass a full stack of defences. Roblox is not involved in the distribution of malware, hence the conditions for calling this a rootkit are not met. As for NtTerminateProcess, it can be used for anything and everything; here it can be used to terminate any processes that are related to cheats. Furthermore, NtTerminateProcess (not ZwTerminateProcess) is highly unlikely to kill security tools. In essence, the sandbox is alerting on behaviour that could be malicious in another context, but not in the context of this anti-cheat system.

Sandboxes are not universal "pills" in malware analysis, hence they are not deployed in the same manner when software has to "pull the plug" on files.

They require deep knowledge and understanding how software works. Be it malicious or safe.
So what you are saying is that this is “just anti-cheat” and nothing to worry about. I disagree.


First, if this system trips PatchGuard to intentionally trigger a BSOD when a VM is detected, that is already intrusive behavior. PatchGuard exists to protect the kernel from exactly this kind of manipulation.

Second, if it is capable of writing to kernel32.dll (or any other core system DLL) by bypassing PatchGuard’s protections, it is operating in the same space and using similar techniques as actual malware. The method doesn’t need to be malicious today; the fact that it can do it at all is a red flag.

We also have to factor in the audience: games like Roblox have a huge minor-dominated player base. Many of them may try to cheat without understanding that the anti-cheat or background modules could be doing things far riskier than just blocking exploits.

From a security standpoint, this means:
  1. Possible PatchGuard evasion — could be abused for malicious code injection.
  2. Privilege escalation — writing to protected system DLLs is a kernel-level action.
  3. User trust implications — minors and non-technical users cannot assess these risks themselves.
Oh, also: in user mode, NtTerminateProcess is just the raw system-call stub in ntdll.dll that jumps into the kernel. It still goes through the normal Windows security checks, meaning that if it’s running as an unprivileged process, it can’t kill protected or higher-privileged processes.
However, here it’s being invoked from within their kernel driver context.
Even if it’s named NtTerminateProcess in the disassembly, it’s still executing with kernel privileges, so it can terminate anything, including AV/EDR.
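Here is a rough, purely illustrative user-mode sketch of that point (my own code; the PID is a placeholder). The ntdll stub gives you no extra power, because the access check already happens when you try to obtain a PROCESS_TERMINATE handle:

Code:
// Illustrative sketch: in user mode, NtTerminateProcess is just a stub in
// ntdll.dll and is gated by the same handle/access checks as TerminateProcess.
// Against a protected process (e.g. an AV service), OpenProcess with
// PROCESS_TERMINATE already fails, so the stub never gets a usable handle.
// The PID below is a placeholder; this is not the driver's code.
#include <windows.h>
#include <cstdio>

typedef LONG (NTAPI *NtTerminateProcess_t)(HANDLE ProcessHandle, LONG ExitStatus);

int main()
{
    DWORD pid = 1234;   // placeholder PID of some protected/privileged process
    HANDLE h = OpenProcess(PROCESS_TERMINATE, FALSE, pid);
    if (!h) {
        std::printf("OpenProcess failed, error %lu (access denied expected)\n",
                    GetLastError());
        return 1;
    }
    auto stub = (NtTerminateProcess_t)GetProcAddress(
        GetModuleHandleW(L"ntdll.dll"), "NtTerminateProcess");
    if (stub)
        stub(h, 0);     // same security checks apply; no extra power here
    CloseHandle(h);
    return 0;
}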


I’m requesting peer review from those with kernel security expertise to confirm whether these behaviors constitute unacceptable security risk and, if so, whether regulatory attention is warranted.
Thank you for your time
Side note: if you want, I can share the decompiled code that I backed up; I would like your review on it.
 
So, while your points about the practical side of law are spot on, the underlying legal truth is that a company's actions aren't de facto legal just because their lawyers say so; they're simply operating within a framework of calculated risk.
That's societal reality. Innit? Then there's the case where a jury could have a finding of "legal" while a different jury made an "illegal" determination. One judge says "Yea," another says "Nay."

For the most part, what is legal versus illegal is well established - or more accurately - widely accepted within a jurisdiction. Even globally, with things that cover commerce, contracts, and so on.

Most EULAs are written in such a manner that it is very rare for any of their sections to be voided. To be perfectly honest, that is due almost exclusively to the fact that there are few entities that seek a legal remedy for software-related issues. Now it is a common practice to include unenforceable language in EULAs, but often that is done because the publisher wants a single, unified EULA to cover any global market. So they are risk gambling and will find out one way or the other when that user with deep pockets files a lawsuit and it goes all the way to a judge's or jury's determination. Unfortunately for the world, there are not enough angry people with deep pockets willing to litigate.

All EULAs are worded in a way to undermine any potential enforceability to the end user's benefit; all the words therein are deliberately written to insulate the publisher from enforcement actions or liabilities. The probabilities and realities of the language universally favor the software publisher. Just think about all the sketchy, underhanded stuff that large tech companies get away with globally (in virtually all markets).

The EU regulatory framework and enforcement mechanisms are one way to protect users, but even then they are not adequate. Under the law, tech companies are permitted to get away with far too many things that should be prohibited.
 
We also have to factor in the audience: games like Roblox have a huge minor-dominated player base. Many of them may try to cheat without understanding that the anti-cheat or background modules could be doing things far riskier than just blocking exploits.
Well, they should not be cheating, that's the whole point of having this anti-cheat system in place. You cheat, you suffer and then you don't cheat again. Though these practices are highly questionable and very aggressive, usage of cheats is hurting the entire community, mainly players that are out there not using cheats. So Roblox is taking measures to protect the communities.
Second, if it is capable of writing to kernel32.dll (or any other core system DLL) by bypassing PatchGuard’s protections, it is operating in the same space and using similar techniques as actual malware. The method doesn’t need to be malicious today; the fact that it can do it at all is a red flag.
Actual malware rarely operates in kernel mode, unless a BYOVD is involved. Malware operates in user mode mostly.
  1. Possible PatchGuard evasion — could be abused for malicious code injection.
  2. Privilege escalation — writing to protected system DLLs is a kernel-level action.
  3. User trust implications — minors and non-technical users cannot assess these risks themselves.
Yes, the Roblox drivers can be abused. The same goes for any other driver; even Avast, Zemana and Microsoft Defender drivers have been abused in attacks. Everything that's used can also be abused. If the driver is abused, then the digital signature will be invalidated and Roblox will have to purchase another one; they will be responsible for fixing their driver too.

The driver should be designed with security in mind, but even if it's not, that's still not illegal. Programming is not something set in stone and there aren't many laws (any country or jurisdiction) that set restrictions on how developers should operate. Please see @Divergent and @bazang posts, they are discussing the legal point of view.
 
I’m requesting peer review from those with kernel security expertise to confirm whether these behaviors constitute unacceptable security risk and, if so, whether regulatory attention is warranted.
Perform an in-depth forensic analysis of the software, document the findings, make an argument regarding the legality therein, and then submit it to the regulatory institutions within your jurisdiction.

That's a very expensive endeavor and might require you to crowd-skill and crowd-fund the entire process. For one, a successful regulatory complaint will require consulting an attorney, if not attorneys, where the second hand of the clock goes "Cha-Ching" with every tick.

You can contact the Electronic Frontier Foundation for guidance.
 
Well, they should not be cheating, that's the whole point of having this anti-cheat system in place. You cheat, you suffer and then you don't cheat again. Though these practices are highly questionable and very aggressive, usage of cheats is hurting the entire community, mainly players that are out there not using cheats. So Roblox is taking measures to protect the communities.

Actual malware rarely operates in kernel mode, unless a BYOVD is involved. Malware operates in user mode mostly.

Yes, the Roblox drivers can be abused. The same goes for any other driver; even Avast, Zemana and Microsoft Defender drivers have been abused in attacks. Everything that's used can also be abused. If the driver is abused, then the digital signature will be invalidated and Roblox will have to purchase another one; they will be responsible for fixing their driver too.

The driver should be designed with security in mind, but even if it's not, that's still not illegal. Programming is not something set in stone and there aren't many laws (any country or jurisdiction) that set restrictions on how developers should operate. Please see @Divergent and @bazang posts, they are discussing the legal point of view.

I agree that players shouldn’t be cheating; that’s obvious.
However, the way Roblox is approaching this still ignores proportionality and consent, which are critical both legally and ethically.


"Malware rarely operates in kernel mode"
This is misleading. Yes, mass-market commodity malware usually stays in user-mode because it’s easier and requires fewer privileges. But targeted malware, advanced persistent threats, and rootkits absolutely do go kernel-mode. That’s because kernel-mode lets you bypass multiple security measures including PatchGuard, antivirus, and even OS-level protections — and achieve deep persistence.


BYOVD (Bring Your Own Vulnerable Driver) isn’t rare anymore; it’s practically the standard for advanced malware families today. For example, Lazarus Group, BlackLotus, and Slingshot all leveraged kernel-level code for stealth and persistence. When Roblox runs a kernel driver, it’s operating in the exact same privilege space as these advanced threats, regardless of its stated intent.

"Drivers can be abused but that’s not illegal" / "Everyone uses it, so it’s fine"
This logic is flawed. There’s a huge difference between:
  • A signed Microsoft driver, following documented APIs and subject to predictable, auditable behavior.
  • A heavily-obfuscated, unsigned, or hidden-in-installer driver engaging in undocumented kernel manipulation.

Legality ≠ Safety.
Plenty of things are technically legal yet completely unsafe or hostile to the user. Even if the intended use case is legitimate, the capabilities are indistinguishable from malware techniques. That is unacceptable in any serious security review — especially for a product with a massive user base of minors.
 
Legality ≠ Safety.
Plenty of things are technically legal yet completely unsafe or hostile to the user. Even if the intended use case is legitimate, the capabilities are indistinguishable from malware techniques. That is unacceptable in any serious security review — especially for a product with a massive user base of minors.
You are describing Windows, Linux, Android, Apple iOS, etc.

Safety (physical, digital) only matters when there is a standard of safety that must be achieved in a product. Getting that safety standard adopted as a matter of societal policy is the challenge. Even then, policy is not the same as practice.

There have been ongoing campaigns for decades to make software more safe (more secure, safer to use, providing more robust protections, privacy) in day-to-day practice. Users primarily - and the impracticality of regulatory systems when it comes to regulating software - are responsible for the little progress made to date.

Policies of "Don't do that" or "These things are not allowed" would be impossible to impose upon and enforce within the software development community. What government is going to enact, enable, and then deploy the "Software Code Police"? Failed effort from the start if it involves people as the inspectors and enforcers. Potentially achievable if AI is used to inspect code; all code must be submitted for review. Then the global software publisher industry would take rifles and pitchforks to the streets, crying out "Safe Code N**is! Killing innovation! Adding overhead expense!"
 
"Malware rarely operates in kernel mode"
This is misleading. Yes, mass-market commodity malware usually stays in user-mode because it’s easier and requires fewer privileges. But targeted malware, advanced persistent threats, and rootkits absolutely do go kernel-mode. That’s because kernel-mode lets you bypass multiple security measures including PatchGuard, antivirus, and even OS-level protections — and achieve deep persistence.
How many pieces of malware are released daily?
According to Kaspersky, 170 million pieces of malware are released daily.

Of them how many enter kernel mode?
Exact reports are not known, but a report from 2020 found at least 620 malicious drivers in circulation.
BYOVD (Bring Your Own Vulnerable Driver) isn’t rare anymore; it’s practically the standard for advanced malware families today. For example, Lazarus Group, BlackLotus, and Slingshot all leveraged kernel-level code for stealth and persistence. When Roblox runs a kernel driver, it’s operating in the exact same privilege space as these advanced threats, regardless of its stated intent.
The attacks from these groups are highly targeted and constitute a "drop in the ocean" of attacks.
Yes, Roblox is operating at kernel level - there are many programmes that operate at the same level; for example, some AVs (won't name and shame) use 10-15 kernel-mode drivers. In total, a relatively lean Windows installation has at the very least 60-70 kernel-mode drivers. This is not the developers' fault; Microsoft allows the installation of these drivers, and software developers use the feature provided by Microsoft. Apple kicked everyone out of the kernel space a long time ago and only allows user-mode extensions.

Malware also happens to operate in kernel mode in some instances. But not everything operating in kernel mode is malware.
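That 60-70 figure is easy to sanity-check on your own machine. Here is a small illustrative sketch (my code, just a rough example) using the documented EnumDeviceDrivers API to count the kernel-mode drivers currently loaded:

Code:
// Illustrative: count kernel-mode drivers currently loaded, via the
// documented EnumDeviceDrivers API (psapi).
#include <windows.h>
#include <psapi.h>
#include <cstdio>
#include <vector>

#pragma comment(lib, "psapi.lib")

int main()
{
    std::vector<LPVOID> bases(4096);        // generously sized buffer
    DWORD bytesUsed = 0;
    if (!EnumDeviceDrivers(bases.data(),
                           (DWORD)(bases.size() * sizeof(LPVOID)),
                           &bytesUsed)) {
        std::printf("EnumDeviceDrivers failed, error %lu\n", GetLastError());
        return 1;
    }
    std::printf("kernel-mode drivers loaded: %u\n",
                (unsigned)(bytesUsed / sizeof(LPVOID)));
    return 0;
}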
"Drivers can be abused but that’s not illegal" / "Everyone uses it, so it’s fine"
This logic is flawed. There’s a huge difference between:
  • A signed Microsoft driver, following documented APIs and subject to predictable, auditable behavior.
  • A heavily-obfuscated, unsigned, or hidden-in-installer driver engaging in undocumented kernel manipulation.
I'm glad you can make good use of AI, but when chatting to AI, you need to provide all information and parameters to get an accurate response.
In this case you have no heavily-obfuscated, unsigned or hidden driver. You have a driver which is released in accordance with all best practices and as part of a legitimate application.
 
You are describing Windows, Linux, Android, Apple iOS, etc.

Safety (physical, digital) only matters when there is a standard of safety that must be achieved in a product. Getting that safety standard adopted as a matter of societal policy is the challenge. Even then, policy is not the same as practice.

There have been ongoing campaigns for decades to make software more safe (more secure, safer to use, providing more robust protections, privacy) in day-to-day practice. Users primarily - and the impracticality of regulatory systems when it comes to regulating software - are responsible for the little progress made to date.

Policies of "Don't do that" or "These things are not allowed" would be impossible to impose upon and enforce within the software development community. What government is going to enact, enable, and then deploy the "Software Code Police"? Failed effort from the start if it involves people as the inspectors and enforcers. Potentially achievable if AI is used to inspect code; all code must be submitted for review. Then the global software publisher industry would take rifles and pitchforks to the streets, crying out "Safe Code N**is! Killing innovation! Adding overhead expense!"
I know regulating software is a massive challenge, especially across crowded platforms and millions of developers worldwide. But when software has the power to disrupt or manipulate kernel memory, potentially trip PatchGuard, and cause system instability, I see that as crossing every line; it can’t just be masked as “innovation.”


We’re not talking about general-purpose apps here; we’re talking about rootkit-level capabilities hidden inside software marketed to millions, many of whom (around 80%) are minors. There has to be a standard, or at least legal accountability, for software that messes with core OS functions in aggressive ways.


This isn’t just a technical challenge; it’s a consumer safety and privacy issue.

I’m not advocating for “Safe Code N**is,” and you can’t compare a driver to N**is. But some level of transparency and enforceable policy seems necessary when risks get this high.

The status quo might be unbalanced or imperfect, but that doesn’t mean we shouldn’t push for better or improved versions — especially when users are young and often completely unaware of the risks they’re exposed to.
 
How many pieces of malware are released daily?
According to Kaspersky, 170 million pieces of malware are released daily.

Of them how many enter kernel mode?
Exact reports are not known, but a report from 2020 found at least 620 malicious drivers in circulation.

The attacks from these groups are highly targeted and constitute a "drop in the ocean" of attacks.
Yes, Roblox is operating at kernel level - there are many programmes that operate at the same level; for example, some AVs (won't name and shame) use 10-15 kernel-mode drivers. In total, a relatively lean Windows installation has at the very least 60-70 kernel-mode drivers. This is not the developers' fault; Microsoft allows the installation of these drivers, and software developers use the feature provided by Microsoft. Apple kicked everyone out of the kernel space a long time ago and only allows user-mode extensions.

Malware also happens to operate in kernel mode in some instances. But not everything operating in kernel mode is malware.

I'm glad you can make good use of AI, but when chatting to AI, you need to provide all information and parameters to get an accurate response.
In this case you have no heavily-obfuscated, unsigned or hidden driver. You have a driver which is released in accordance with all best practices and as part of a legitimate application.
Okay, thanks for your reply, and for your insights about how I use AI; English is my second language, so that is why I use AI to fix grammar issues and summarize better.
What you are stating is solid and I won’t disagree: Kaspersky has a reliable malware database, and I agree that attacks from these groups are highly targeted.
But kernel mode isn’t a crime; it’s where legit stuff like AVs hangs out too. And sure, kernel-level malware is rare compared to the mountain of user-mode junk out there.

But my concern isn’t about kernel mode itself. It’s about what this driver is doing: tripping PatchGuard on purpose, overwriting kernel32.dll, scanning for VMs. That’s straight-up reckless and can cause crashes and serious security risks. And the worst part? Most users are kids who don’t even realize their system is being messed with.

Kernel drivers are normal. This driver? It’s playing dirty, behaving like malware. That’s why it needs to be looked at hard, no matter how many other drivers run in kernel mode.

Just looking for real kernel security folks to weigh in and confirm if this is crossing a dangerous line.
Appreciate the input so far.
 
But my concern isn’t about kernel mode itself. It’s about what this driver is doing: tripping PatchGuard on purpose, overwriting kernel32.dll, scanning for VMs. That’s straight-up reckless and can cause crashes and serious security risks. And the worst part? Most users are kids who don’t even realize their system is being messed with.
I understand your concern and it is perfectly valid. But we also need to look at it from a developer’s point of view. You think that a relatively small set of actions will be easy to implement: a little bit of code, a little bit of graceful error handling, and job done.

But in reality, when you start writing this “little bit of code”, tens of little and not-so-little things come up.

It could be done for a few reasons:
  • To find out whether a cheat has compromised PatchGuard.
  • To redirect calls cheats can make to its own anti-cheat system (similar to how AV behavioural blocking works).
  • To scan for virtual machines.

These practices are highly invasive and can lead to system instability. It’s sad to see a gaming platform going that far.

But again, this is all facilitated by legitimate operations Microsoft allows; no attacks are used in the process, and it will be very difficult (even when damage is caused) to prove that the damage was caused intentionally.

In essence, yes. This is one piece of invasive, intrusive and nosy junkware that gains far more privileges than it needs.
But to prove it is illegal, @bazang’s suggestion is the best one, and it’s very costly.
 
