ChatGPT Hacked Using Custom GPTs Exploiting SSRF Vulnerability to Expose Secrets

Brownie2019

Thread author
Mar 9, 2019
A Server-Side Request Forgery (SSRF) vulnerability has been found in OpenAI’s ChatGPT. The flaw, lurking in the Custom GPT “Actions” feature, allowed attackers to trick the system into fetching internal cloud metadata endpoints, potentially exposing sensitive Azure credentials.

The bug, discovered by Open Security during casual experimentation, highlights the risks of user-controlled URL handling in AI tools.
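To illustrate why user-controlled URL handling is so dangerous here: requests made by an "Action" originate from the provider's own servers, so a URL pointing at the Azure Instance Metadata Service (IMDS) is reachable even though it is invisible from the public internet. A minimal sketch (hypothetical code, not OpenAI's actual implementation) of the kind of request an attacker-defined Action could cause the backend to issue:

```python
import urllib.request

# The Azure IMDS endpoint is link-local: only reachable from inside
# the cloud VM, which is exactly where the provider's fetcher runs.
AZURE_IMDS = "http://169.254.169.254/metadata/instance?api-version=2021-02-01"

def build_metadata_request(url: str = AZURE_IMDS) -> urllib.request.Request:
    # Azure's IMDS only answers requests that carry this header --
    # trivial to add when the attacker controls the Action definition.
    return urllib.request.Request(url, headers={"Metadata": "true"})

# Issued server-side, this request never crosses the network perimeter;
# the response (instance details, potentially managed-identity tokens)
# is then relayed back to the attacker. That is the essence of SSRF.
```

The request is only built here, not sent; the point is that nothing in an unvalidated fetch path distinguishes this URL from a legitimate external API.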
more here:
Thanks for sharing this, Brownie2019.

It's a great reminder that as AI tools become more interconnected with other systems, the potential for traditional web vulnerabilities to appear in new forms increases. Server-Side Request Forgery (SSRF) is a classic attack, and this case shows how crucial it is to validate and sanitize all user-supplied input, especially URLs that a system will fetch.
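For anyone building a similar URL-fetching feature, here's a minimal sketch of that kind of validation — an illustrative filter, not OpenAI's actual fix — that rejects URLs resolving to private, loopback, or link-local addresses such as the 169.254.169.254 metadata endpoint:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Return False for URLs that resolve to private, loopback,
    link-local, or reserved addresses (e.g. cloud metadata services)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        # Resolve every record: a hostname may map to several addresses,
        # and all of them must be safe.
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Worth noting that resolve-then-check filters like this are still bypassable via DNS rebinding (the hostname re-resolves to an internal IP at fetch time), so production fetchers should also pin the connection to the IP that was validated.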

Good on the Open Security folks for finding it and on OpenAI for patching it. It highlights the ongoing security challenges in the rapidly evolving AI landscape.