In March 2026, attackers broke into Context.ai’s AWS environment and stole OAuth tokens that its now-deprecated “AI Office Suite” had accumulated from consumer users. One of those tokens belonged to a Vercel employee who had signed up using their Vercel Google Workspace account and granted the integration broad scopes. The attacker used that token to log into Vercel’s Google Workspace as the employee, pivoted into Vercel’s internal systems, and exfiltrated a subset of customers’ non-sensitive environment variables, which were then advertised for sale on BreachForums for $2M by a poster claiming to be ShinyHunters. Both companies published bulletins on April 19, 2026. The shared failure mode is not prompt injection, not a bug in an LLM, and not an MCP protocol flaw — it is the decades-old pattern of a third-party OAuth application storing long-lived, broadly scoped user tokens server-side and becoming a high-value token vault whose compromise cascades into every connected tenant.
As a company with a central role in agentic security, we at Arcade did a deep dive into understanding how this all played out. Here’s the breakdown.
What broke
Vercel: An employee had connected Context.ai’s “AI Office Suite” to their corporate Google Workspace account with broad (“Allow All”) consent. When Context.ai’s AWS environment was breached, the attacker lifted that employee’s Google OAuth token and authenticated as the employee into Vercel’s Workspace, then escalated into Vercel-internal environments and read environment variables that customers had not flagged as “sensitive.” Vercel’s sensitive environment variables (which are stored so they cannot be read back) show no evidence of having been accessed.
Context.ai: Attackers gained unauthorized access to the AWS environment that ran the deprecated AI Office Suite, a consumer product that let users hook AI agents into external SaaS applications via a third-party integration layer. That environment held OAuth tokens for Office Suite users, and at least some were stolen. Context.ai initially disclosed to one customer in March 2026 and only learned the blast radius was larger after Vercel’s own investigation surfaced it. Context.ai has not disclosed the initial intrusion vector into AWS. CrowdStrike was engaged for forensics, but no public CrowdStrike report has been released as of April 21. The affected AWS environment, hosting, and the Office Suite OAuth application were taken down post-incident.
Common class: Confused-deputy token hoarding in a multi-tenant AI integration. A third-party AI service accumulated production OAuth tokens for many users’ primary identity providers, stored them server-side, and — when breached — became a one-hop pivot into every tenant that had ever authorized the app. The “AI” part is incidental to the exploitation; the vulnerability is a classic SaaS supply-chain credential-vault compromise amplified by over-broad OAuth scopes.
Auth model that failed:
- At Vercel: Google Workspace OAuth 2.0 authorization-code grant with broadly-scoped consent from an enterprise identity. The Register quotes Context.ai stating Vercel’s “internal OAuth configurations appear to have allowed this action to grant these broad permissions in Vercel’s enterprise Google Workspace.” In other words, the Workspace admin controls did not restrict which third-party apps employees could authorize against corporate identities, and no per-app scope restriction was in place.
- At Context.ai: Server-side storage of long-lived OAuth 2.0 refresh tokens (and likely short-lived access tokens) for a third-party IdP, in a shared multi-tenant AWS environment, combined with a consent UX that accepted broad (“Allow All”) scopes. Token encryption-at-rest state is not disclosed; the fact that tokens were usable post-exfiltration implies the encryption either did not cover the tokens or the decryption key was also reachable from the compromised environment.
IOC (verbatim from Vercel): OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com — Google Workspace admins are advised to check usage immediately. Vercel says the app “potentially affect[ed] its hundreds of users across many organizations.”
AI / MCP / tool-calling involvement: The breached product was an AI agent product (Context.ai AI Office Suite) that used OAuth to call external SaaS on users’ behalf. However, the actual exploitation did not use the AI model, did not use prompt injection, and did not require the MCP protocol. The AI product was relevant only because it was the entity that had requested and stored the broadly-scoped OAuth token. No CVE has been assigned as of the date of this report; none is expected because no software vulnerability in Vercel or Google was exploited.
The shared failure mode
Centralized OAuth token hoarding with over-broad scopes in a third-party agent platform, coupled with cross-tenant blast radius at the platform’s cloud control plane.
Mechanically, the pattern is:
- A third-party AI/agent platform requests OAuth consent to a user’s primary identity (Google Workspace, Microsoft 365, etc.) with scopes broad enough to be useful across many future tool calls — often because the product doesn’t know in advance which tools a user will invoke.
- Refresh tokens are stored server-side, centrally, by the platform, because the agent runs asynchronously or in a server-side loop and must be able to call provider APIs without the user present.
- The platform’s cloud environment is a single trust domain. A breach of that environment yields a token vault: one intrusion, many victims, at the IdP level — not merely at the platform level.
- The stolen token is indistinguishable from a legitimate session at the provider. There is no out-of-band attestation that the call originated from the platform rather than from an attacker replaying the token, because the provider issued the token to the platform in the first place.
- Enterprise IdP policy did not constrain which third-party apps could bind to corporate identities, or with which scopes. Vercel’s admin configuration allowed “Allow All” to an app described by Context.ai as issuing tokens reflecting “the scope of access that account held.”
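The fourth point is the crux: the provider’s token endpoint will mint a fresh access token for anyone who presents the refresh token and the app’s client credentials, and those credentials typically sit in the same environment as the vault. A minimal sketch of the request shape (endpoint and parameters per Google’s documented OAuth 2.0 refresh grant; all credential values below are placeholders):

```python
# Sketch: what a holder of a stolen refresh token can replay.
# Google's token endpoint cannot distinguish the legitimate platform
# from an attacker who exfiltrated the vault -- both present the same
# client credentials and refresh token. Values here are placeholders.

TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def build_refresh_request(client_id: str, client_secret: str,
                          refresh_token: str) -> dict:
    """Form-encoded body for an OAuth 2.0 refresh_token grant (RFC 6749 §6)."""
    return {
        "grant_type": "refresh_token",
        "client_id": client_id,          # the OAuth app's identity...
        "client_secret": client_secret,  # ...often stored in the same breached env
        "refresh_token": refresh_token,  # the stolen long-lived credential
    }

body = build_refresh_request("app-id", "app-secret", "stolen-token")
# Nothing in `body` attests to *where* the request originates.
```

This is what “indistinguishable from a legitimate session” means in practice: the attack requires no exploit at the provider, only possession of material the platform was already storing.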
This is the same class as the 2022 Heroku/Travis-CI GitHub OAuth incident and the 2023 CircleCI incident. It is not new with AI. What AI changes is the volume and breadth of scopes: an agent platform that might “send email, create docs, read drive, manage calendar, post to Slack” asks for a much larger scope union than a single-purpose integration, and therefore becomes a more valuable token vault.
What Arcade’s architecture does to keep your agents safe
Arcade.dev is the only MCP runtime for secure, reliable AI agent deployments.
Per-user OAuth token isolation. Arcade binds every authorization to a specific user: the tool declares requires_auth=Reddit(scopes=["read"]); Arcade initiates an OAuth challenge for that user; the provider issues a token that Arcade stores keyed to that user; and the token is injected into the tool’s Context only during that user’s invocation.
Scope-per-tool, not scope-per-app. Each tool declares its minimum scopes. Gmail.SendEmail requests gmail.send, not a Workspace-wide scope union. This gives the user fine-grained control over what data third-party services can access and what actions they can take in their accounts. A compromised single-tool token has a narrow blast radius by construction.
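To illustrate the declaration pattern, here is a toy sketch (the decorator and provider class are hypothetical stand-ins for the `requires_auth` pattern quoted above, not Arcade’s actual SDK; the scope string is Gmail’s real `gmail.send` scope):

```python
# Toy sketch of scope-per-tool declaration. `requires_auth` and `Google`
# here are illustrative stand-ins, not Arcade's real SDK.
from dataclasses import dataclass, field

@dataclass
class Google:
    scopes: list = field(default_factory=list)

def requires_auth(provider):
    """Attach the tool's minimum scopes to the function as metadata."""
    def wrap(fn):
        fn.auth = provider
        return fn
    return wrap

@requires_auth(Google(scopes=["https://www.googleapis.com/auth/gmail.send"]))
def send_email(ctx, to: str, subject: str, body: str):
    ...  # would call the Gmail API with a token scoped only to gmail.send

# The runtime reads the declaration and requests exactly these scopes:
assert send_email.auth.scopes == ["https://www.googleapis.com/auth/gmail.send"]
```

Because the consent request is derived from the tool, not from a catch-all app registration, a stolen token for this tool can send mail and nothing else.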
LLM and MCP client don’t see the token. Arcade is designed to carefully handle OAuth tokens when agents invoke a tool. The OAuth token is fetched by the Engine and injected into a server-side Context object at runtime. The tool function makes the outbound HTTP call server-side. This closes the prompt-injection-to-token-leak path that exists when poorly-designed tools load tokens into the same process as the model.
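The separation can be sketched as follows (a toy model of the flow described above, not Arcade’s implementation; the store, Context shape, and tool are all illustrative):

```python
# Toy sketch of server-side token injection. The model/MCP client only
# ever sees the tool name and its JSON arguments; the Context carrying
# the OAuth token is built engine-side and never serialized to the client.
from dataclasses import dataclass

@dataclass
class Context:
    user_id: str
    token: str  # injected per-user, per-invocation; lives only server-side

TOKEN_STORE = {("alice", "gmail"): "ya29.placeholder-token"}  # illustrative vault

def run_tool(ctx: Context, args: dict) -> str:
    # ...would make the outbound provider API call with ctx.token...
    return f"sent to {args['to']}"

def invoke(user_id: str, tool: str, args: dict) -> dict:
    ctx = Context(user_id, TOKEN_STORE[(user_id, tool)])  # engine-side fetch
    result = run_tool(ctx, args)                          # outbound call is server-side
    return {"tool": tool, "result": result}               # no token in the reply

reply = invoke("alice", "gmail", {"to": "bob@example.com"})
assert "token" not in reply  # nothing model-visible contains the credential
```

Even if a prompt injection fully controls the tool arguments, the credential is not in any string the model can be tricked into echoing.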
Enforcing proper security boundaries. As we outlined in our MCP server authorization guide, Arcade respects the security boundaries between the MCP client (untrusted), the MCP server (trusted), and the auth provider (trusted). This is the architectural inversion of the Context.ai pattern, where the agent runtime was also the token vault and a single breach collapsed both.
User verifier. When a user first authorizes a tool, Arcade performs a user verification check “that the user who is authorizing the tool is the same user who started the authorization flow, which helps prevent phishing attacks.” Production deployments are expected to implement a custom verifier so users authenticate against the application’s own identity system, not Arcade’s.
Contextual Access hooks. Arcade provides three hook points (access, pre-execution, post-execution) that let the operator inject webhooks that can deny a tool call, strip PII from a response, scan outputs for prompt injection, or log every invocation. Hooks chain at both organization and project level; fail-closed vs fail-open is configurable. This is the mechanism for “audit every interaction” and for catching exfiltration patterns at runtime.
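A rough sketch of how such a hook chain composes (hook names, signatures, and the fail-closed switch are illustrative, not Arcade’s actual webhook API):

```python
# Toy sketch of chained pre/post-execution hooks with a configurable
# fail mode. Hook names and signatures are illustrative only.

def deny_external_recipients(call: dict) -> None:
    """Pre-execution hook: block tool calls addressed outside the org."""
    if call["args"].get("to", "").endswith("@example.org"):
        raise PermissionError("external recipient blocked")

def redact_pii(result: str) -> str:
    """Post-execution hook: strip a known PII pattern from the response."""
    return result.replace("555-0100", "[REDACTED]")

PRE_HOOKS = [deny_external_recipients]   # org-level hooks run before project-level
POST_HOOKS = [redact_pii]

def guarded_invoke(call: dict, fail_closed: bool = True) -> dict:
    for hook in PRE_HOOKS:
        try:
            hook(call)
        except PermissionError:
            if fail_closed:
                return {"denied": True}   # deny the call outright
    result = f"result for {call['tool']} containing 555-0100"  # stand-in tool output
    for hook in POST_HOOKS:
        result = hook(result)             # each hook transforms the response
    return {"denied": False, "result": result}
```

The same chain is the natural place to log every invocation, which is what turns an exfiltration attempt into an auditable event rather than a silent one.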
Self-hosting as a blast-radius reduction. Arcade publishes on-premises MCP server deployment docs and a self-hosted Engine (read docs). Running Arcade in the customer’s own cloud collapses the cross-tenant risk demonstrated in the Context.ai case to a single-tenant risk.
Audit logs. Arcade’s control plane is fully auditable and provides an automatic, immutable record of every administrative action taken on the agent runtime.
Conclusion
Calling this “an AI breach” is imprecise in a way that obscures the lesson. No model was jailbroken. No MCP flaw was exploited. No prompt injection fired. What happened is that a small AI company built a product that required broad, long-lived OAuth tokens to provide useful agentic behavior, stored them centrally, got breached, and handed the attacker a ready-to-use corporate session at one of its users’ employers. The AI angle matters because agentic products systematically ask for larger scope unions than single-purpose integrations, which makes their backends higher-value targets. However, in this case, the exploitation technique is a 2010s-era third-party OAuth pivot.
Arcade’s architecture attacks the two most damaging properties of that pattern directly: it scopes tokens per-tool rather than per-app, and it keeps tokens out of the LLM/client process so a compromise of the agent-facing surface does not equal a compromise of the token. To be clear, it does not make the control plane invincible, it does not read users’ minds about what scopes are reasonable, and it does not fix enterprise IdP policy. What it does is change the blast-radius math: a breach of an Arcade-style token vault yields narrower, more auditable tokens, and self-hosted deployments remove the cross-tenant leverage that made Context.ai’s single intrusion a multi-organization incident. The right takeaway for engineering leaders is narrower than “switch vendors.” It is “audit every third-party OAuth app bound to your corporate IdP, enforce scope minimums at the Workspace admin layer, and treat any agent platform that hoards refresh tokens as a production credential store.” Vercel has already changed one default in response: environment variable creation is now sensitive-by-default. The deeper default that still needs changing lives upstream of both companies, in the OAuth consent UX.
The Arcade.dev team has spent years hardening auth at Okta, Stormpath, and Redis. We’re applying those lessons to make AI infrastructure production-ready. See these principles in action in our blueprint for a secure OpenClaw alternative using Arcade and Claude Code.
Want to build AI agents that actually work in production? While we wait for authorization to land in MCP, Arcade already implements secure auth for 100+ integrations. No bot tokens, no security nightmares—just real OAuth flows that work.
Start building with Arcade → Sign Up.


