Agentic AI can be susceptible to prompt injection, and attackers can exfiltrate your company data.
Simple example: an attacker inserts instructions for the AI into a web page he knows you will ask the AI to crawl. And instruction asks the AI to dump your company secrets. And if you use Agentic AI to link to your email, those secrets can be emailed to the attacker.
The motto: use the AI to think, do the copy / pasting yourself.
cybersecuritynews.com
Simple example: an attacker inserts instructions for the AI into a web page he knows you will ask the AI to crawl. And instruction asks the AI to dump your company secrets. And if you use Agentic AI to link to your email, those secrets can be emailed to the attacker.
The motto: use the AI to think, do the copy / pasting yourself.
Promptware Kill Chain - Five-Step Kill Chain Model for Analyzing Cyberthreats
LLMs now power business workflows, but researchers warn of “promptware,” multi-stage attacks exploiting AI apps like real malware.
cybersecuritynews.com
Last edited:

