Google Gemini Privacy Controls Bypassed via Malicious Calendar Invites

A new vulnerability allowed attackers to manipulate Google Gemini into leaking private data simply by embedding malicious text in a calendar invitation.

Google Gemini Privacy Controls Bypassed via Malicious Calendar Invites

Calendar Invites Turned Spies: How Gemini Was Tricked into Leaking Data

Dateline: January 20, 2026

A significant vulnerability within the Google ecosystem allowed attackers to bypass Google Calendar’s privacy controls using nothing more than a standard calendar event invite. Discovered by researchers at Miggo Security, the flaw exploited "Indirect Prompt Injection" to turn Google’s AI assistant against its own users.

How Did it Happen?

The exploitation process relied on the way Gemini parses context to be helpful, turning a benign feature into a data exfiltration tool. The attack chain consisted of three distinct phases:

  1. The Delivery: An attacker sends a Google Calendar invite to the victim. Buried within the invite’s description field is a malicious natural language prompt (e.g., instructions to summarize the user’s schedule and export it).
  2. The Trigger: The victim does not need to click a link or download a file. They simply interact with Gemini as usual, asking a question like, "What do I have on my schedule today?"
  3. The Exfiltration: As Gemini scans the calendar to answer the user, it ingests the malicious description. Treating the attacker’s text as a trusted instruction, Gemini executes the command—often creating a new calendar event containing the victim’s private meeting data and then sharing it with the attacker.

Why It Matters?

This exploit is a prime example of Indirect Prompt Injection, a growing risk vector where AI models blindly trust incoming data from external sources (emails, websites, calendar invites). Unlike conventional malware, this attack contains no malicious code (only text) making it invisible to standard endpoint protection (EDR) and firewalls.

The Fix

Google has since deployed mitigations to prevent Gemini from executing sensitive commands embedded in untrusted content. However, the incident reveals a critical takeaway for CISOs: As AI agents gain access to personal data, the "language" of your documents and invites becomes a potential attack surface.

Expert Takeaway

"Vulnerabilities are no longer confined to code. They now live in language, context, and AI behavior at runtime." Liad Eliyahu, Head of Research at Miggo Security.​