Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot

Parkinsond
Thread author
Cybersecurity researchers have disclosed details of a new attack method dubbed Reprompt that could allow bad actors to exfiltrate sensitive data from artificial intelligence (AI) chatbots like Microsoft Copilot in a single click, while bypassing enterprise security controls entirely.

"Only a single click on a legitimate Microsoft link is required to compromise victims," Varonis security researcher Dolev Taler said in a report published Wednesday.
"No plugins, no user interaction with Copilot."

"The attacker maintains control even when the Copilot chat is closed, allowing the victim's session to be silently exfiltrated with no interaction beyond that first click."

 
Risk Context & Scope

Target Scope

The report clarifies that this specific attack does not affect enterprise customers using Microsoft 365 Copilot; it primarily impacted the consumer version of Copilot.

Current Status
Microsoft has deployed a fix for this issue. However, the underlying mechanism, Indirect Prompt Injection, remains a systemic challenge for all GenAI integrations.

Recommendations
While this specific exploit is patched, the class of vulnerability remains relevant.

Treat "Trusted" AI Links with Caution
Users should be trained that links to legitimate AI services (like copilot.microsoft.com or chatgpt.com) can still carry malicious payloads via URL parameters. Verify the source of any "pre-filled" AI prompt.

Data Minimization
Avoid sharing highly sensitive PII, credentials, or proprietary secrets in consumer-grade AI chats, as session hijacking can lead to silent exfiltration.

Enterprise Controls
Organizations should enforce policies that restrict the integration of AI tools with business-critical data repositories unless strict "Human-in-the-Loop" (HITL) verification is in place.

Monitoring
Security teams should monitor for unusual outbound traffic patterns from AI-related domains, particularly consecutive requests to unknown external servers, which may indicate an exfiltration chain.
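As a minimal sketch of that monitoring idea (the log format, allowlist, and threshold are assumptions for illustration, not part of the Varonis research), a detector could flag clients that make several consecutive requests to destinations outside a known-good set:

```python
from collections import defaultdict

# Hypothetical, simplified proxy-log records: (client_ip, destination_domain).
ALLOWLIST = {"copilot.microsoft.com", "login.microsoftonline.com"}
RUN_THRESHOLD = 3  # consecutive unknown destinations worth flagging

def flag_exfil_chains(records):
    """Flag clients with RUN_THRESHOLD+ consecutive requests to unknown domains."""
    runs = defaultdict(int)   # current run length of unknown destinations per client
    flagged = set()
    for client, domain in records:
        if domain in ALLOWLIST:
            runs[client] = 0  # a known destination breaks the run
        else:
            runs[client] += 1
            if runs[client] >= RUN_THRESHOLD:
                flagged.add(client)
    return flagged

logs = [
    ("10.0.0.5", "copilot.microsoft.com"),
    ("10.0.0.5", "a1.example-cdn.net"),
    ("10.0.0.5", "a2.example-cdn.net"),
    ("10.0.0.5", "a3.example-cdn.net"),
]
print(flag_exfil_chains(logs))  # -> {'10.0.0.5'}
```

In practice this logic would sit on real proxy or DNS telemetry and feed a SIEM rule rather than a standalone script, but the run-of-unknown-destinations pattern is the signal the recommendation describes.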

References

Source Article

The Hacker News - "Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration From Microsoft Copilot".

Primary Research
Varonis.