Scams & Phishing News: 25 Million Users Affected as AI Chat Platform Leaks 300 Million Messages

Brownie2019

Mar 9, 2019
“Chat & Ask AI,” a popular mobile application available on both Google Play and the Apple App Store, has suffered a significant data exposure.
An independent security researcher discovered a vulnerability that left approximately 300 million private messages publicly accessible.
The breach affects more than 25 million users and raises serious concerns about privacy and data handling in the booming AI app market.
The exposure was identified by a security researcher known as Harry, who reported his findings to 404 Media.
According to the analysis, the root cause of the leak was not a sophisticated cyberattack but a simple misconfiguration.
This matches a very common real-world failure mode: a cloud database (Google Firebase) left too open due to misconfiguration, leading to a large-scale “data exposure” rather than a sophisticated device compromise. In the reporting, the researcher (“Harry”) said the exposed backend allowed access to chat histories and related metadata, and that the dataset spanned hundreds of millions of messages across tens of millions of users. ([gbhackers.com](https://gbhackers.com/ai-chat-platform-leaks-300-million-messages/))
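For anyone running a Firebase-backed app and wondering what “left too open” looks like in practice: the classic mistake is shipping with permissive Realtime Database rules (e.g. `".read": true`). Below is a minimal sketch of locked-down rules that deny everything by default and let an authenticated user reach only their own subtree. The `users`/`$uid` structure is a hypothetical example, not the schema of this app, and the exact reported misconfiguration hasn't been published.

```json
{
  "rules": {
    // Deny all reads and writes by default; no unauthenticated access.
    ".read": false,
    ".write": false,
    "users": {
      "$uid": {
        // A signed-in user may read/write only the node matching their own UID.
        ".read": "auth != null && $uid === auth.uid",
        ".write": "auth != null && $uid === auth.uid"
      }
    }
  }
}
```

The key property: rules don't cascade restrictively in Realtime Database, so a single `".read": true` higher up overrides any per-user rule beneath it, which is exactly how whole datasets end up world-readable.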

What this likely means for users (risk-wise)
  • This is primarily a privacy/confidentiality issue: anything typed into the app should be treated as potentially copied and retained by third parties, even if the database is later secured. ([404media.co](https://www.404media.co/massive-ai-chat-app-leaked-millions-of-users-private-conversations/))
  • It does not automatically mean your phone was “infected” or that the big model vendors (OpenAI/Anthropic/Google) were breached; the reporting describes a third-party “wrapper” app and its backend storage exposure. ([404media.co](https://www.404media.co/massive-ai-chat-app-leaked-millions-of-users-private-conversations/))
  • Secondary risk: exposed content can be used for targeted phishing, extortion attempts, or social engineering (because the chats can contain very personal details). This is a general consequence of sensitive data exposure.

Practical steps if someone has used “Chat & Ask AI”
  • Stop using the app for sensitive content immediately; consider uninstalling it.
  • If the app has an account: change that password (and anywhere else you reused it) and enable MFA on your email account (email takeover is often the most damaging follow-on).
  • Use the app’s in-app options (if present) to delete chat history and request account/data deletion; also revoke any unnecessary permissions in iOS/Android settings.
  • Be extra cautious with unexpected emails/texts that reference things “only you would know” (a classic sign your private data was exposed).
  • If you’re worried your specific device is compromised (separate from the data exposure): run a scan with a reputable mobile security product and review installed apps for anything you don’t recognize. (A backend leak alone doesn’t prove device compromise.)

Takeaway for the broader “AI wrapper app” ecosystem
Treat third-party wrapper apps as higher risk than first-party offerings: you’re trusting not just the model provider, but also the wrapper’s storage, auth, logging, and security testing practices. ([404media.co](https://www.404media.co/massive-ai-chat-app-leaked-millions-of-users-private-conversations/))

The most concerning aspect is not the sophistication of the attack, but the negligent configuration. An unsecured database can be more dangerous than any advanced exploit. It’s yet another reminder that technology can be brilliant, but if the door is left unlocked, the thief walks in effortlessly. 🛡️🗄️🚪