Cybersecurity researchers have disclosed details of
a new attack method dubbed Reprompt that could allow bad actors to exfiltrate sensitive data from artificial intelligence (AI) chatbots like Microsoft Copilot in a single click, while bypassing enterprise security controls entirely.
"
Only a single click on a legitimate Microsoft link is required to compromise victims," Varonis security researcher Dolev Taler
said in a report published Wednesday.
"No plugins, no user interaction with Copilot."
"
The attacker maintains control even when the Copilot chat is closed, allowing the victim's session to be silently exfiltrated with no interaction beyond that first click."
Experts disclosed a Reprompt attack that allowed single-click data exfiltration from Microsoft Copilot via indirect prompt injection, now fixed by MS.
thehackernews.com