Google Bans OpenClaw Users — A Warning Shot for the Agentic AI Era
Developers using OpenClaw's OAuth plugin to tap into subsidized Gemini tokens are finding their Google accounts restricted.

Hundreds of developers woke up this week to 403 errors and locked Google accounts. The reason? They'd been using OpenClaw, a popular open-source AI agent, to access Google's Gemini models through the back door.
OpenClaw launched in November 2025 and quickly became a hit in the developer community — 219,000 GitHub stars in just a few months. The tool lets users run local AI agents for tasks like email management and web browsing. To make it work, many users authenticated through Google Antigravity, Google DeepMind's developer-facing Gemini platform, using an OAuth plugin that routed requests through Antigravity's backend infrastructure.
The catch: that infrastructure offers subsidized access to high-end models like Gemini 2.5 Pro. Using it to power a third-party product like OpenClaw directly violates Google's terms of service. When usage spiked and backend degradation followed, Google's automated systems flagged the traffic as "malicious usage" and started shutting accounts down — no warnings, no grace period.
Reports began flooding Google's AI Developer Forum around mid-February 2026. Developers paying $249.99 a month for AI Ultra subscriptions found themselves locked out not just from Antigravity, but in some cases from Gmail, Workspace, and account histories stretching back decades.
Varun Mohan, Google DeepMind's product lead, acknowledged on X that the surge "tremendously degraded service quality" for legitimate users, and said the enforcement was aimed at protecting "actual users." He clarified that Google services outside Antigravity were not intended to be affected — though the collateral damage tells a different story.
Peter Steinberger, OpenClaw's creator (who recently joined OpenAI), called the bans "draconian" and announced plans to drop Antigravity support entirely. Anthropic, for its part, had already updated its own terms to explicitly prohibit third-party OAuth usage in tools like OpenClaw.
The damage goes beyond lost API access. Security researchers had already flagged over 21,000 exposed OpenClaw instances vulnerable to infostealers targeting local config files, along with supply chain risks in the project's ecosystem. Add a mass ban wave on top, and developers relying on OpenClaw as part of their daily workflow are now scrambling.
The community is already forking — Nanobot and IronClaw are picking up displaced users fast. But the broader signal is hard to ignore: subsidized AI access has always had strings attached, and those strings can snap overnight.
Three things worth paying attention to if you use third-party AI agents:

- Subsidized model access always comes with terms of service attached, and violating them can cost you far more than the API — in this case, Gmail, Workspace, and entire account histories.
- Enforcement is increasingly automated and immediate: no warnings, no grace period.
- The tools themselves carry risk, from exposed instances leaking credentials to supply chain attacks.
The agentic AI space is still working out the rules as it goes.
But this week made one thing clear: platforms are paying attention, and enforcement, when it comes, moves fast.
