Google Bans OpenClaw Users — A Warning Shot for the Agentic AI Era

Developers using OpenClaw's OAuth plugin to tap into subsidized Gemini tokens are finding their Google accounts restricted.

Hundreds of developers woke up this week to 403 errors and locked Google accounts. The reason? They'd been using OpenClaw, a popular open-source AI agent, to access Google's Gemini models through the back door.


What Happened?

OpenClaw launched in November 2025 and quickly became a hit in the developer community — 219,000 GitHub stars in just a few months. The tool lets users run local AI agents for tasks like email management and web browsing. To make it work, many users authenticated through Google Antigravity, Google DeepMind's developer-facing Gemini platform, using an OAuth plugin that routed requests through Antigravity's backend infrastructure.

The catch: that infrastructure offers subsidized access to high-end models like Gemini 2.5 Pro. Using it to power a third-party product like OpenClaw directly violates Google's terms of service. When usage spiked and backend degradation followed, Google's automated systems flagged the traffic as "malicious usage" and started shutting accounts down — no warnings, no grace period.

Reports began flooding Google's AI Developer Forum around mid-February 2026. Developers paying $249.99 a month for AI Ultra subscriptions found themselves locked out not just from Antigravity, but in some cases from Gmail, Workspace, and account histories stretching back decades.

Varun Mohan, Google DeepMind's product lead, acknowledged on X that the surge "tremendously degraded service quality" for legitimate users, and said the enforcement was aimed at protecting "actual users." He clarified that Google services outside Antigravity were not intended to be affected — though the collateral damage tells a different story.

Peter Steinberger, OpenClaw's creator (who recently joined OpenAI), called the bans "draconian" and announced plans to drop Antigravity support entirely. Anthropic, for its part, had already updated its own terms to explicitly prohibit third-party OAuth usage in tools like OpenClaw.


What's the Impact?

The damage goes beyond lost API access. Security researchers had already flagged over 21,000 exposed OpenClaw instances vulnerable to infostealers targeting local config files, and had identified supply chain risks as well. Add a mass ban wave on top, and developers relying on OpenClaw as part of their daily workflow are now scrambling.

The community is already forking — Nanobot and IronClaw are picking up displaced users fast. But the broader signal is hard to ignore: subsidized AI access has always had strings attached, and those strings can snap overnight.


How to Avoid This

Three things worth paying attention to if you use third-party AI agents:

  • Read the ToS before you build on it. Using a platform's infrastructure to power a separate product — even unintentionally — often falls into a grey area that leans toward violation.
  • Don't route personal accounts through developer tools. The blowback here wasn't just API access. People lost Gmail. Keep work infrastructure and personal accounts separate wherever possible.
  • Watch for exposed config files. OpenClaw instances were found leaking credentials through local config files. Regularly audit what your agentic tools store and where they store it — especially if they authenticate against third-party platforms.
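That last point is easy to automate. Here is a minimal audit sketch in Python that walks a set of config directories and flags files that are readable by other users or that contain credential-like keys. The directory names, file extensions, and marker strings are assumptions for illustration; adjust them to match the tools you actually run.

```python
import stat
from pathlib import Path

# Hypothetical config locations; adjust for your own agent tooling.
CANDIDATE_DIRS = [
    Path.home() / ".openclaw",
    Path.home() / ".config",
]

# Suffixes typically used for config files.
CONFIG_SUFFIXES = {".json", ".yaml", ".yml", ".toml", ".env", ".ini"}

# Key names that commonly indicate stored credentials.
SENSITIVE_MARKERS = ("token", "secret", "api_key", "oauth", "refresh_token")


def audit(dirs=CANDIDATE_DIRS, markers=SENSITIVE_MARKERS):
    """Return (path, loose_perms, has_creds) tuples for config files that
    are group/world-readable or appear to contain credential-like keys."""
    findings = []
    for base in dirs:
        if not base.is_dir():
            continue
        for path in base.rglob("*"):
            if not path.is_file() or path.suffix not in CONFIG_SUFFIXES:
                continue
            mode = path.stat().st_mode
            # Flag anything readable by group or other users.
            loose_perms = bool(mode & (stat.S_IRGRP | stat.S_IROTH))
            try:
                text = path.read_text(errors="ignore").lower()
            except OSError:
                continue
            has_creds = any(m in text for m in markers)
            if loose_perms or has_creds:
                findings.append((str(path), loose_perms, has_creds))
    return findings


if __name__ == "__main__":
    for path, loose, creds in audit():
        flags = []
        if loose:
            flags.append("readable by group/other")
        if creds:
            flags.append("credential-like keys")
        print(f"{path}: {', '.join(flags)}")
```

Running this on a schedule (or in a pre-commit hook) won't make a tool's storage practices safe, but it gives you an early warning when an agent starts writing tokens somewhere world-readable.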

The agentic AI space is still working out the rules as it goes.

But this week made one thing clear: platforms are paying attention, and enforcement, when it comes, moves fast.