Google Gemini Privacy Controls Bypassed via Malicious Calendar Invites
A new vulnerability allowed attackers to manipulate Google Gemini into leaking private data simply by embedding malicious text in a calendar invitation.
A new vulnerability allowed attackers to manipulate Google Gemini into leaking private data simply by embedding malicious text in a calendar invitation.

Dateline: January 20, 2026
A significant vulnerability within the Google ecosystem allowed attackers to bypass Google Calendar’s privacy controls using nothing more than a standard calendar event invite. Discovered by researchers at Miggo Security, the flaw exploited "Indirect Prompt Injection" to turn Google’s AI assistant against its own users.
The exploitation process relied on the way Gemini parses context to be helpful, turning a benign feature into a data exfiltration tool. The attack chain consisted of three distinct phases:
This exploit is a prime example of Indirect Prompt Injection, a growing risk vector where AI models blindly trust incoming data from external sources (emails, websites, calendar invites). Unlike conventional malware, this attack contains no malicious code (only text) making it invisible to standard endpoint protection (EDR) and firewalls.
Google has since deployed mitigations to prevent Gemini from executing sensitive commands embedded in untrusted content. However, the incident reveals a critical takeaway for CISOs: As AI agents gain access to personal data, the "language" of your documents and invites becomes a potential attack surface.
"Vulnerabilities are no longer confined to code. They now live in language, context, and AI behavior at runtime." Liad Eliyahu, Head of Research at Miggo Security.

A critical RCE (CVSS 10.0) in n8n exposes automation pipelines and stored secrets to full compromise—upgrade to version 1.36.1 immediately.

Pre-production checks cannot secure a dynamic cloud. Find out how AI bridges the visibility gap between deployment and defense to catch what Shift-Left misses.

Misconfigurations are the #1 cause of cloud breaches—discover how to move from "finding" them to "fixing" them with AI-driven guardrails.