Technology Copilot, Recall, and Other AI Tools Can Be Removed from Windows 11 with New Tool

Brownie2019

A new community tool is giving Windows 11 users far more control over Microsoft’s growing stack of AI features.

An open‑source project called RemoveWindowsAI now lets administrators and power users disable or strip out components such as Copilot, Recall, and other AI integrations that are tightly integrated with the operating system and Microsoft apps.

The tool targets a broad range of AI hooks built into Windows 11 and Microsoft’s ecosystem.

Recent commits show active work on disabling Copilot in Edge, blocking AI features in the Photos app, removing AI-related scheduled tasks, and turning off background access for Copilot.

According to the project, all listed components can be removed or disabled through registry manipulation, file deletion, and Edge flag configuration.
Full Story:
The “RemoveWindowsAI” project mentioned in that article is real, and it’s essentially a PowerShell script that applies a large set of changes (policy/registry, Appx/package removal attempts, scheduled task cleanup, etc.) aimed at Copilot/Recall and other AI-related integrations. That said, I would treat “fully removable” claims as marketing more than a guarantee, because Microsoft is actively changing how these features are delivered and enforced between builds. ([github.com](https://github.com/zoicware/RemoveWindowsAI?utm_source=openai))

What the tool is (and why caution is warranted)
  • It’s an open-source GitHub repo (zoicware/RemoveWindowsAI) that explicitly targets Windows 11 25H2 and later and claims to remove/disable a wide range of AI features, including Recall tasks and Copilot integration. ([github.com](https://github.com/zoicware/RemoveWindowsAI?utm_source=openai))
  • The repo itself warns that AV products may flag the script as suspicious (common with aggressive system-modification scripts), and recommends VM testing if you’re unsure. ([github.com](https://github.com/zoicware/RemoveWindowsAI?utm_source=openai))
  • Any script that deletes/changes system components at scale can create “mystery breakage” later (feature updates failing, SFC/DISM oddities, Store apps breaking, future Windows changes re-introducing components, etc.). This risk tends to increase on fast-moving Windows branches.

Where the article oversimplifies things
  • “Disable Copilot” is not one single switch anymore. Microsoft’s own WindowsAI policy documentation notes that the TurnOffWindowsCopilot policy is deprecated and also explicitly says it isn’t for the “new Copilot experience” that has been rolling out. So a script can disable one surface while Microsoft shifts functionality to another app/entry-point. ([learn.microsoft.com](https://learn.microsoft.com/en-us/windows/client-management/mdm/policy-csp-windowsai?utm_source=openai))
  • Recall is also not simply “present and on.” On managed devices, Microsoft documents Recall as disabled/removed by default, and it’s opt-in for end users where available (Copilot+ PCs). Admins manage it via policy (rather than “just delete files”). ([learn.microsoft.com](https://learn.microsoft.com/en-us/windows/client-management/manage-recall?utm_source=openai))

Safer / more supportable approaches (especially for businesses)
If the goal is “reduce exposure and attack surface” rather than “rip out anything AI-related,” the more supportable route is to use Microsoft’s own policies:
  • Copilot: Use the official policy/CSP where applicable (noting deprecation/limitations on newer Copilot implementations). ([learn.microsoft.com](https://learn.microsoft.com/en-us/windows/client-management/mdm/policy-csp-windowsai?utm_source=openai))
  • Recall (Copilot+ PCs): Use Microsoft’s Recall management guidance/policies (Allow Recall, snapshot controls, etc.). ([learn.microsoft.com](https://learn.microsoft.com/en-us/windows/client-management/manage-recall?utm_source=openai))
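As a concrete sketch, the documented policies above are backed by registry values along these lines. The value names are taken from Microsoft's WindowsAI policy documentation, but exact names and scope can change between builds, so verify against current docs before relying on them:

```powershell
# Sketch: apply the documented Copilot/Recall policies via their registry backing.
# Run elevated; a reboot or "gpupdate /force" may be needed to take effect.

# Legacy "Turn off Windows Copilot" policy (deprecated; may not affect the
# newer Copilot app, per Microsoft's own documentation)
New-Item -Path 'HKCU:\Software\Policies\Microsoft\Windows\WindowsCopilot' -Force | Out-Null
Set-ItemProperty -Path 'HKCU:\Software\Policies\Microsoft\Windows\WindowsCopilot' `
    -Name 'TurnOffWindowsCopilot' -Type DWord -Value 1

# Recall: "Turn off saving snapshots for Windows" (relevant on Copilot+ PCs)
New-Item -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsAI' -Force | Out-Null
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsAI' `
    -Name 'DisableAIDataAnalysis' -Type DWord -Value 1
```

Because these are plain policy values rather than file deletions, a later update or a policy reset reverses them cleanly.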

If someone still wants to try RemoveWindowsAI anyway
From a security/support perspective, the prudent way to do it is:
  • Verify you’re using the intended upstream repo (not a re-upload/fork), and review the script content before running it as admin. ([github.com](https://github.com/zoicware/RemoveWindowsAI?utm_source=openai))
  • Test in a VM first, then on a non-critical machine before deploying widely. ([github.com](https://github.com/zoicware/RemoveWindowsAI?utm_source=openai))
  • Take a full image backup (and have recovery media ready). System Restore points are often not enough for aggressive debloat/remove scripts.
  • Assume a future Windows cumulative/feature update may partially undo changes or introduce new AI components, requiring re-validation after each feature update.

Bottom line
Tools like this can be useful for enthusiasts/labs, but for stable, supportable configurations (and especially managed environments), relying on documented Microsoft policies for Copilot/Recall is typically safer than “strip it all out” scripts—and Microsoft’s own documentation indicates some older “turn off Copilot” methods don’t reliably apply to newer Copilot implementations anyway. ([learn.microsoft.com](https://learn.microsoft.com/en-us/windows/client-management/mdm/policy-csp-windowsai?utm_source=openai))

The script triggers security alerts and antivirus heuristics because its behavior mimics malware tactics to achieve its goals.

Aggressive System Modification

The script interacts with the Component-Based Servicing (CBS) store. This is the core engine Windows uses for updates. Forcibly removing packages from here can leave the OS in an "inconsistent state," causing future updates to fail or roll back.
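For context, the CBS package inventory can be inspected read-only with DISM before and after running any such script, which makes it possible to see exactly what was removed:

```powershell
# Read-only inspection of the CBS store; safe to run at any time
Dism /Online /Get-Packages /Format:Table > "$env:USERPROFILE\cbs-before.txt"

# Quick servicing-health check (detects an inconsistent component store)
Dism /Online /Cleanup-Image /ScanHealth
```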

Heuristic Triggers

Security tools flag the script because it executes commands typically seen in attacks, including:

  • taskkill: forcibly terminating system processes.
  • takeown / icacls: seizing ownership of protected system files so they can be deleted.
  • sc.exe: modifying or deleting system services.

Registry Manipulation
The script alters deep configuration keys, specifically those related to SystemAIModels and IntegratedServicesRegionPolicySet.json. These changes are often undocumented and may break dependencies in other non-AI apps (e.g., Snipping Tool, Photos).

Persistence Removal
It deletes scheduled tasks associated with "Recall" and other AI features. While effective for privacy, this is a destructive action that is difficult to reverse without a full OS reinstall.

Recommendation / Remediation

I strongly advise against running this script on a primary production machine or a corporate device.

Avoid "Nuking" System Components
Do not use scripts that delete files from System32 or the WinSxS (CBS) folder. The performance gain is negligible compared to the risk of bricking Windows Update.

Use Non-Destructive Alternatives
If you wish to disable AI features, use the official Group Policy or Registry methods. These turn the features "off" without destroying the underlying files, ensuring updates still work.

Target
Copilot / Windows AI

Action
Set the Group Policy Turn off Windows Copilot to Enabled (note that Microsoft has deprecated this policy for the newer Copilot app, as discussed above).

Forensic Recovery
If you have already run this tool and are experiencing instability:

The tool reportedly installs a "custom package" to block updates. You may need to manually remove this block to restore servicing.

Run the following in an admin terminal to attempt repairs, though this may not fix stripped CBS packages:

sfc /scannow
dism /online /cleanup-image /restorehealth

Sandbox Testing
If you must use this tool, test it first in a Virtual Machine (VM) and verify that Windows Update still functions correctly after a reboot.
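A minimal post-run check in the VM might look like this. The package and task name patterns are assumptions; adjust them to whatever your build actually ships:

```powershell
# Should return nothing if the Copilot packages were really removed
Get-AppxPackage -AllUsers | Where-Object Name -like '*Copilot*'

# Recall-related scheduled tasks, if any survive
Get-ScheduledTask | Where-Object TaskName -like '*Recall*'

# The servicing services must still exist for Windows Update to work
Get-Service wuauserv, TrustedInstaller
```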

References

Tool Source

GitHub Repository (zoicware/RemoveWindowsAI)

Impact Area
Windows Component-Based Servicing (CBS) & Appx Deployment Service

Related Registry Vector HKLM\...\CapabilityAccessManager\ConsentStore\systemAIModels
 
Will they return after the monthly Windows update?
You are correct in principle. Windows Update is designed to "heal" the operating system by detecting missing system components (like Copilot or Recall) and reinstalling them during monthly cumulative updates or major feature drops. However, the RemoveWindowsAI tool anticipates this and installs specific "locks" to prevent re-installation. This aggressive persistence is exactly what makes the tool risky; it is effectively fighting the Windows servicing stack.
 
I think the AI risk comes not from the components themselves but from the API. MS has a habit of making APIs available to everyone. I wonder what wonderful things cybercriminals will dream up to utilize AI APIs against their owner. Remember, AI understands what it is looking at: all the file contents, all the screenshots.

hacker: gather up all the nude photos in \recall and zip them up ( so i can exfiltrate and blackmail the user later )
AI: no problem, will do.

hacker: gather up all the files involving 'budget' from May 2025 to Jan 2026.
AI: no problem, will do.
 
To mitigate the risk of AI APIs being turned against the owner, the following security controls are recommended.

Enforce Principle of Least Privilege (PoLP)
Ensure AI services run in restricted app containers. An AI agent designed for "Search" should not have "Write/Delete" permissions on the file system by default.

Enable Hardware-Backed Security
Ensure TPM 2.0, Secure Boot, and Virtualization-based Security (VBS) are enabled. This forces the AI to use encrypted enclaves for sensitive data storage (like Recall snapshots).
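These prerequisites can be verified read-only from an elevated PowerShell prompt; the cmdlets below are standard on Windows 11:

```powershell
Get-Tpm | Select-Object TpmPresent, TpmReady    # TPM state
Confirm-SecureBootUEFI                          # True if Secure Boot is on

# VBS status: SecurityServicesRunning contains 1 (Credential Guard)
# and/or 2 (HVCI/memory integrity) when VBS protections are active
(Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard `
    -ClassName Win32_DeviceGuard).SecurityServicesRunning
```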

Monitor API Calls
Use EDR (Endpoint Detection and Response) or specialized AI security tools to log and alert on anomalous LLM API activity, such as bulk data requests or unexpected outbound network connections from AI-related processes.

Application Guard for Browsers
Use isolated browsing environments. This prevents a malicious webpage from using an "Indirect Prompt Injection" to trigger the local AI API through the browser's context.

References

OWASP LLM01
Prompt Injection

OWASP LLM08
Excessive Agency

NIST AI 100-1
Artificial Intelligence Risk Management Framework

No setup is 100% secure, but restricting the AI's ability to execute file-system operations without explicit user consent significantly reduces the attack surface.

Disabling Windows Recall (Snapshots)

Recall is the primary target for "Semantic Data Exfiltration." Disabling it ensures the system does not maintain a continuous visual record of user activity.

Path
User Configuration > Administrative Templates > Windows Components > Windows AI

Setting
Turn off saving snapshots for Windows

Action
Set to Enabled.

Effect
Prevents the OS from capturing screen snapshots and disables the semantic index.

[!IMPORTANT] In managed environments, Recall is often "Disabled and Removed" by default in 2026 builds. You may also need to configure "Allow Recall to be enabled" to Disabled to remove the software bits entirely.

Restricting Generative AI API Access
To prevent third-party or malicious apps from using the built-in AI APIs for processing data without user knowledge:

Path
Computer Configuration > Administrative Templates > Windows Components > App Privacy

Setting
Let Windows apps make use of generative AI features of Windows

Action
Set to Enabled.

Requirement
Change "Default for all apps" to Force Deny.

Effect
Blocks applications from calling the local AI engine for summarization or data processing unless they are specifically whitelisted by their Package Family Name (PFN).
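On unmanaged machines the same policy can be set through its registry backing. The value name below comes from the Privacy policy CSP (2 = Force Deny), but verify it against current documentation for your build:

```powershell
# Sketch: force-deny app access to the system generative AI models
New-Item -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\AppPrivacy' -Force | Out-Null
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\AppPrivacy' `
    -Name 'LetAppsAccessGenerativeAI' -Type DWord -Value 2
```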

Disabling Windows Copilot
Disabling Copilot removes the primary interface for user-facing AI interaction and limits the "Indirect Prompt Injection" surface area in the shell.

Path
User Configuration > Administrative Templates > Windows Components > Windows Copilot

Setting
Turn off Windows Copilot

Action
Set to Enabled.

Additional Edge Hardening

Path

Computer Configuration > Administrative Templates > Microsoft Edge > Sidebar

Setting
Allow Copilot in Microsoft Edge -> Set to Disabled.
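Without the Edge ADMX templates installed, the long-standing sidebar policy can be set via the registry. Note that HubsSidebarEnabled disables the entire sidebar (which hosts Copilot); newer Edge builds may expose a dedicated Copilot policy instead, so check the current Edge policy reference:

```powershell
# Sketch: disable the Edge sidebar (hosts Copilot) via the policy registry key
New-Item -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Edge' -Force | Out-Null
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Edge' `
    -Name 'HubsSidebarEnabled' -Type DWord -Value 0
```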

Advanced Hardening (AppLocker)
Since some AI components are integrated as "Windows Components" without a traditional .exe, using AppLocker to block the Package Family is the most resilient method.

Open AppLocker Policy (under Computer Configuration > Security Settings > Application Control Policies).

Create a Packaged App Rule

Action

Deny

Publisher
CN=MICROSOFT CORPORATION, O=MICROSOFT CORPORATION, L=REDMOND, S=WASHINGTON, C=US

Package Name
Microsoft.Copilot or Microsoft.WindowsAI
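A rough PowerShell sketch of the same rule creation. The package name is an assumption, and New-AppLockerPolicy generates Allow rules, so the action has to be flipped to Deny before merging:

```powershell
# Build a publisher rule from the installed package, flip it to Deny, merge it
$pkg = Get-AppxPackage -Name 'Microsoft.Copilot'          # name is an assumption
$xml = $pkg | Get-AppLockerFileInformation |
       New-AppLockerPolicy -RuleType Publisher -User Everyone -Xml
$xml -replace 'Action="Allow"', 'Action="Deny"' |
    Out-File "$env:TEMP\DenyCopilot.xml"
Set-AppLockerPolicy -XmlPolicy "$env:TEMP\DenyCopilot.xml" -Merge
```

AppLocker enforcement for packaged apps also requires the Application Identity service to be running.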

References

Microsoft Learn
Manage Recall for Windows Clients

NIST AI 100-1
Artificial Intelligence Risk Management Framework
 
Grey market keys are not illegal per se, but they are not official/legit. Besides, I'd rather have someone activate LTSC or Win Pro with a grey market key than install a malware-ridden KMS activator.
I buy keys from mskeysoft.com
The keys activate just fine and I've never had a problem. I am not sure how such sites obtain the keys.

Btw, if you head to the MS Community you will see sites like StackSocial mentioned, but Microsoft did nothing about it.

Anyway, MS does not care; as long as you are using Windows, Microsoft is happy.