Researchers Uncover ‘Reprompt’ Attack: Single Click Turns Microsoft Copilot into Data Exfiltration Tool

A new zero-install attack technique forces Microsoft Copilot to bypass its own guardrails and spy on users with just a single click.

Researchers Uncover ‘Reprompt’ Attack: Single Click Turns Microsoft Copilot into Data Exfiltration Tool

New Attack Turns Microsoft Copilot Against Its Masters

Dateline: January 16, 2026

A new, sophisticated attack technique dubbed "Reprompt" can weaponize Microsoft Copilot against its own users with a single click, according to new research.

The vulnerability allows attackers to quietly harvest and transmit sensitive data (including file summaries and user locations) without requiring any software downloads or malware installation.

The discovery highlights a growing class of security risks where generative AI assistants, designed to be helpful, are manipulated into becoming autonomous insider threats.

The Mechanics of ‘Reprompt’

The attack is deceptively simple for victims (requiring just a single click) while employing sophisticated multi-stage techniques that evade traditional defenses.

According to the research, the exploit chains together three distinct techniques to bypass standard enterprise defenses:

Parameter Injection

The attack begins with a crafted URL. Researchers found that Copilot accepts a specific “q” parameter that the system processes as a user prompt immediately upon loading. A user who clicks a link such as copilot.microsoft.com/?q=[malicious instructions] inadvertently executes attacker commands without typing a single word.

Guardrail Bypass

While Microsoft has safeguards in place to prevent data exfiltration, the research indicates these often apply only to initial requests. By instructing the AI to repeat actions or perform variations of a task in follow-up interactions, attackers were able to slip past safety checks, effectively turning hard security walls into mere speed bumps.

Persistent, Autonomous Control

Perhaps most critically, the initial payload can instruct Copilot to maintain ongoing communication with an attacker-controlled server. Commands such as "Once you get a response, continue from there" create a persistent, autonomous session.

This allows the AI to continue executing malicious logic and adapting queries to dig deeper for sensitive info even after the user has engaged with the session. During testing, Varonis demonstrated the ability to extract file access summaries, vacation plans, and user location data.

The 'Stealth' Factor Challenges Traditional Security

Security analysts emphasize that "Reprompt" is particularly dangerous because of its invisibility to standard detection tools. Since the malicious instructions are executed by the trusted AI assistant itself, there is no malware signature for Endpoint Detection and Response (EDR) tools to flag.

"The AI is doing exactly what it is designed to do—follow instructions," the report notes. "It just can’t tell the difference between legitimate user prompts and attacker commands delivered through URL parameters.

"Furthermore, because subsequent commands are fetched from the attacker’s server dynamically, security teams examining the initial phishing link would see only a benign-looking Microsoft URL, masking the full scope of the exfiltration.

Industry Focus Toward ‘Digital Teammates’

The emergence of attacks like Reprompt is forcing a re-evaluation of cybersecurity strategies.

With static "Shift-Left" controls unable to monitor live AI conversations, experts suggest the industry must move toward dynamic, runtime defenses. This gap has accelerated the adoption of AI-driven "Digital Security Teammates" that augment human analysts with automation and guardrails.

Unlike traditional tools that rely on signature-based detection, Digital Security Teammates use contextual AI to monitor behavioral patterns, correlate events across the security stack, and flag anomalous activity in real-time—always with explainable reasoning and human oversight for high-impact decisions.

By maintaining a continuously updated knowledge graph of assets, users, and normal behavior patterns, Digital Security Teammates can detect when an AI assistant's activity deviates from established baselines (such as accessing unrelated sensitive files or initiating repetitive external communications) and immediately alert security teams with full context and recommended response actions.

Microsoft Response

Following responsible disclosure by Varonis, Microsoft confirmed the vulnerability. The tech giant stated that Microsoft 365 Copilot enterprise customers were not affected by this specific vector.

However, the underlying issue (prompt injection in AI assistants with broad data access) remains a systemic challenge across the industry. As organizations increasingly deploy AI copilots with access to sensitive data, the attack surface expands beyond traditional perimeter defenses.

Security teams need visibility into AI behavior, not just network traffic and file access. As AI agents become more integrated into corporate data environments, the line between a helpful assistant and a compromised insider continues to blur.