TL;DR — The Vercel breach started when Context.ai’s OAuth tokens were stolen via infostealer malware. The attacker pivoted into a Vercel employee’s Google Workspace, then into Vercel’s internal systems. If you use Cursor, Claude Code, v0, or any AI tool connected to your accounts, you have the same attack surface. Here’s your 5-minute audit. Jump to the audit walkthrough →

📊 What this post covers:

  • How OAuth supply chain attacks differ from npm supply chain attacks
  • The full Vercel/Context.ai attack chain, step by step
  • A 5-minute OAuth audit for Google, GitHub, Slack, Notion, and npm
  • Scope hygiene that would have reduced the Vercel blast radius to near zero
  • A 1-hour response playbook for when your vendor gets breached

I ran this audit on my own Google account last week. I found 23 OAuth apps connected. Three were AI tools I’d forgotten I authorized. One had full Drive access. Here’s the breach that made me check:

Attacker
→ Lumma Stealer malware on Context.ai employee's machine
→ Steals Context.ai's Google Workspace OAuth tokens
→ Accesses Vercel employee's Google Workspace (email, Drive, calendar)
→ Pivots into Vercel internal systems
→ 580 employee records exposed
→ Customer environment variables leaked
→ ShinyHunters lists data for $2M on BreachForums

On April 19, 2026, Vercel disclosed that a third-party AI tool called Context.ai (an AI analytics platform) had been compromised via Lumma Stealer malware. The attacker didn’t hack Vercel directly. They hacked an AI tool that a Vercel employee had granted “Allow All” OAuth permissions to. That single OAuth grant gave the attacker access to the employee’s entire Google Workspace. From there, the attacker pivoted into Vercel’s internal systems. 580 employee records. Customer environment variables. A $2M ransom demand from ShinyHunters.

You’re probably thinking: that’s Vercel’s problem. It’s not. If you’ve connected any AI tool to your Google, GitHub, or Slack account, you have the exact same attack surface. This is a different category from the npm supply chain attacks we’ve covered before. npm attacks compromise your build pipeline. OAuth attacks compromise your identity and workspace.

Let’s fix that in 5 minutes.


What is an OAuth supply chain attack?

An OAuth supply chain attack compromises a third-party app’s OAuth tokens to access everything that app was authorized to reach: your email, your repos, your cloud credentials. Unlike npm supply chain attacks (which execute malicious code in your build pipeline), OAuth attacks target your identity and workspace. The attacker doesn’t need your password. They already have a valid token.

Here’s the critical difference:

Attack type          What gets compromised          Entry point             MFA helps?
npm supply chain     Build pipeline, local machine  Malicious package code  N/A
OAuth supply chain   Identity, workspace data       Stolen OAuth token      No
Credential phishing  Single account                 Fake login page         Yes

OAuth tokens bypass MFA by design. They’re issued after initial authentication. Once granted, they act independently. No password prompt. No second factor. An attacker with a stolen OAuth token has the same access as the app itself, for as long as the token is valid.
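The “valid token, no second factor” point can be shown in miniature. An attacker replaying a stolen token attaches it exactly as the legitimate app would; the bearer header is the only credential the API checks. A minimal sketch (the token value and endpoint are illustrative, and the request is built but deliberately never sent):

```python
import urllib.request

# Hypothetical stolen token -- purely illustrative.
STOLEN_TOKEN = "ya29.EXAMPLE_TOKEN"

# The attacker attaches the token exactly as the legitimate app would.
# The Authorization header is the only credential checked:
# no password prompt, no MFA challenge.
req = urllib.request.Request(
    "https://www.googleapis.com/drive/v3/files",
    headers={"Authorization": f"Bearer {STOLEN_TOKEN}"},
)

# Built but deliberately never sent.
print(req.get_header("Authorization"))  # Bearer ya29.EXAMPLE_TOKEN
```

Nothing in that request identifies who is holding the token. That is the whole problem.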

Key insight: 30% of all data breaches are now linked to third-party or supply chain issues, a share that has doubled year-over-year (SOCRadar, 2025). OAuth token theft is the fastest-growing category because it bypasses the MFA and SSO controls that organizations invested heavily in during 2023-2025.


How did the Vercel breach actually work?

A Vercel employee authorized Context.ai with “Allow All” Google Workspace permissions. Context.ai was later infected with Lumma Stealer malware (a commodity infostealer sold on dark web markets for roughly $250/month). The attacker extracted Context.ai’s OAuth tokens and used them to access the employee’s Google Workspace, then pivoted into Vercel’s internal systems.

Here’s the timeline:

When          What happened
~Feb 2026     Lumma Stealer infects Context.ai employee’s machine
Feb-Mar 2026  Attacker extracts OAuth tokens from Context.ai’s systems
Mar 2026      Attacker uses stolen tokens to access Vercel employee’s Google Workspace
Apr 19, 2026  Vercel publishes security bulletin; CEO confirms Context.ai as the vector
Apr 20, 2026  ShinyHunters lists stolen data on BreachForums for $2M

Context.ai wasn’t malicious. They were a victim too. The real failures were systemic: overly broad OAuth scope (“Allow All” instead of minimum necessary permissions) and no monitoring of third-party token usage patterns.

This wasn’t a nation-state attack. It was commodity infostealer malware on a startup employee’s machine. The same malware that ships in pirated software and phishing emails. The barrier to entry is low.

Key insight: The Vercel breach exposed 580 employee records and customer environment variables through a single compromised OAuth token from a third-party AI tool (TechCrunch, April 2026). The attacker didn’t exploit a zero-day or bypass a firewall. They used a valid token that was already authorized.


What AI tools are connected to YOUR accounts right now?

The average organization has 17 AI app integrations connected to Google and Microsoft alone. 98% of organizations have employees using unsanctioned AI tools. Your personal accounts are probably worse, because nobody is auditing them.

Here’s what common AI coding tools can access if compromised:

Tool               What it connects to            Blast radius if compromised
Cursor             GitHub repos (read/write)      Source code, commits, secrets in code
Claude Code (MCP)  GitHub, Drive, Slack, DBs      Depends on MCP server config
v0 by Vercel       Vercel account, GitHub         Deployments, env vars, source code
Zapier AI / n8n    Email, calendar, CRM, DBs      Everything the workflow touches
Otter / Fireflies  Google Calendar, email, Drive  Meeting transcripts, contacts, files
Notion AI          Notion workspace               All pages the integration can read

The Vercel breach happened because one employee connected one AI tool with broad permissions. Now multiply that by every AI tool every engineer on your team has authorized in the past year.

Shadow SaaS app inventories grow 25% every 60 days. If you haven’t audited in two months, your attack surface is already larger than you think.

Key insight: Push Security detected an average of 17 unique AI app integrations per organization connected to Google and Microsoft alone, while most organizations officially approve only 1-2 AI apps for business use (Push Security, April 2026). The gap between sanctioned and actual AI tool usage is the attack surface.


How do you audit your OAuth permissions in 5 minutes?

Go to myaccount.google.com/permissions right now. Count the apps. For each one: do you still use it? Does it have more access than it needs? If you can’t answer both questions, revoke it. Here’s the full checklist across five platforms.

1. Google Workspace

URL: myaccount.google.com/permissions

Review each app. Look for:

  • Apps you don’t recognize or haven’t used in months
  • Apps with “See, edit, create, and delete” access to Drive (that’s drive scope, not drive.file)
  • Apps with Gmail access (can read every email you’ve ever received)

Revoke anything you don’t actively use. You can always re-authorize later.

2. GitHub

URL: github.com/settings/applications

Check both tabs: OAuth Apps and GitHub Apps. They’re different. Look for:

  • Apps with “repo” scope (full read/write to all repos, including private)
  • Classic personal access tokens with no expiration date
  • Fine-grained tokens with more repos than necessary
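If you’d rather check a classic PAT’s scopes from a script than from the settings page: GitHub reports a token’s scopes in the `X-OAuth-Scopes` response header on any authenticated API call. A minimal sketch; the `BROAD_SCOPES` set below is my own shortlist of scopes worth flagging, not an official GitHub list:

```python
import urllib.request

# My own shortlist of scopes worth flagging -- not an official GitHub list.
BROAD_SCOPES = {"repo", "admin:org", "write:packages", "delete_repo"}

def flag_broad_scopes(scope_header: str) -> list[str]:
    """Return the dangerous scopes found in an X-OAuth-Scopes header value."""
    scopes = {s.strip() for s in scope_header.split(",") if s.strip()}
    return sorted(scopes & BROAD_SCOPES)

def audit_token(token: str) -> list[str]:
    """Ask the GitHub API which scopes a classic PAT carries (network call)."""
    req = urllib.request.Request(
        "https://api.github.com/user",
        headers={"Authorization": f"token {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        # GitHub echoes the token's scopes back in this response header.
        return flag_broad_scopes(resp.headers.get("X-OAuth-Scopes", ""))

# The parsing works offline:
print(flag_broad_scopes("repo, read:org, gist"))  # ['repo']
```

Anything this flags is a token that can touch every repo you can, including private ones.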

3. Slack

Path: Workspace settings > Manage apps > Your authorized apps

Any app with chat:write and channels:read can read your messages and post as you.

4. Notion

Path: Settings > My connections > review integrations

Check which pages each integration can access. Notion’s default is “all pages.” Restrict to specific pages where possible.

5. npm

URL: npmjs.com/settings/~/tokens

Check for tokens you didn’t create. After the axios compromise, attackers specifically targeted npm tokens for lateral movement. If you see a classic token with no expiration, replace it with a granular token scoped to specific packages.

Try it now:

  1. Open myaccount.google.com/permissions in a new tab
  2. Count the apps listed and revoke any you don’t recognize
  3. Revoke any app you haven’t used in 3 months
  4. Repeat at github.com/settings/applications (check both OAuth Apps and GitHub Apps tabs)
  5. Total time: under 5 minutes

What is OAuth scope hygiene and why does it matter?

The Vercel employee granted “Allow All” permissions to Context.ai. If they had granted read-only access to a single Drive folder instead, the blast radius would have been near zero. Scope hygiene means giving every tool the minimum permissions it needs, and nothing more.

Here’s what “broad” vs “narrow” looks like with real scope names:

Platform      Broad scope (dangerous)            Narrow scope (safe)
Google Drive  drive (all files, read/write)      drive.file (only files the app creates)
Gmail         gmail.modify (read, send, delete)  gmail.readonly (read only)
GitHub        Classic PAT with repo (all repos)  Fine-grained token, 1 repo, read-only
Slack         admin (full workspace access)      chat:write on specific channels

When an AI tool requests https://www.googleapis.com/auth/drive, it gets full read/write access to everything you can see in Drive. Not just your personal files. Shared drives too. Team folders. That one Google Doc with production database credentials someone shared last year.

Three rules for scope hygiene:

  1. Read-only by default. Most AI tools don’t need write access. If the OAuth consent screen says “edit” or “delete,” ask why.
  2. Scope to specific resources. GitHub fine-grained tokens let you select individual repos. Google’s drive.file scope limits access to files the app created. Use them.
  3. Separate accounts for AI tools. If an AI tool needs broad access (meeting transcripts, email context), use a dedicated Google account with no access to sensitive data.
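Rule 2 in practice: the difference between broad and narrow is literally one query parameter on the consent URL. A hedged sketch using Google’s real authorization endpoint but a hypothetical client ID and redirect URI:

```python
from urllib.parse import urlencode

# Hypothetical client ID and redirect URI -- substitute your own app's values.
CLIENT_ID = "1234567890-example.apps.googleusercontent.com"
REDIRECT_URI = "http://localhost:8080/callback"

def consent_url(scopes: list[str]) -> str:
    """Build a Google OAuth consent URL requesting only the given scopes."""
    params = {
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "response_type": "code",
        "scope": " ".join(scopes),  # space-delimited, per the OAuth 2.0 spec
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

# Narrow: the app can only touch files it creates.
url = consent_url(["https://www.googleapis.com/auth/drive.file"])
print("drive.file" in url)  # True
```

Swap `drive.file` for `drive` in that scope list and you have recreated the grant that sank Vercel.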

Key insight: An app requesting googleapis.com/auth/drive gets full read/write access to everything the user can see in Drive, not just their personal files (Push Security, 2026). The difference between drive and drive.file scope is the difference between “everything in your workspace” and “only files this app created.”


How often should you rotate OAuth tokens?

Every 90 days for high-privilege tokens. Immediately after any vendor discloses a breach. Automate it where possible.

Token type               Rotation interval                  How to automate
GitHub fine-grained PAT  90 days (built-in expiration)      Set expiration at creation time
GitHub classic PAT       Replace with fine-grained          Classic tokens never expire by default
Google OAuth             Quarterly (revoke + re-authorize)  Calendar reminder or Nudge Security
npm token                90 days                            npm token create with --cidr restriction
Slack app token          Quarterly                          Workspace admin > Manage apps

After a vendor breach: revoke ALL tokens for that vendor within 1 hour. Not just the tokens you think were compromised. All of them. The Vercel breach started with tokens that nobody knew were at risk until the disclosure.

GitHub’s fine-grained tokens are the gold standard here. They expire by default, scope to specific repos, and support IP restrictions. If you’re still using classic PATs, this is your sign to migrate.
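If you’d rather script the reminder than trust your calendar, a rotation check is a few lines. A minimal sketch; the token inventory and dates below are hypothetical:

```python
from datetime import date, timedelta

ROTATION_INTERVAL = timedelta(days=90)  # the policy from the table above

def rotation_due(created: date, today: date) -> bool:
    """True once a token is 90 or more days old."""
    return today - created >= ROTATION_INTERVAL

# Hypothetical token inventory with creation dates.
tokens = {
    "github-fine-grained": date(2026, 1, 10),
    "npm-granular": date(2026, 4, 1),
}
for name, created in tokens.items():
    if rotation_due(created, date(2026, 4, 20)):
        print(f"ROTATE: {name}")  # ROTATE: github-fine-grained
```

Drop something like this into a weekly cron job or CI step and stale tokens stop being a silent liability.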

Get weekly security tips for AI coders. Hooks, OAuth hardening, and real breach breakdowns. One email per week. Subscribe to AI Developer Weekly →


What do you do when your vendor gets breached?

You have one hour. Revoke all OAuth tokens for the compromised vendor. Rotate any secrets the vendor could have accessed. Check audit logs for unusual access patterns. Then decide whether to re-authorize with narrower scopes or find an alternative.

4-step emergency response

Step 1: Revoke OAuth tokens (0-15 min)

  • Google: myaccount.google.com/permissions > find the app > Remove Access
  • Google Workspace admin: Admin console > Security > API controls > Manage Third-Party App Access
  • GitHub: github.com/settings/applications > Revoke

Step 2: Rotate exposed secrets (15-30 min)

  • Rotate any API keys, database credentials, or env vars the vendor could have accessed
  • npm: npm token revoke <id> then npm token create
  • AWS/GCP/Azure: rotate IAM keys if the vendor had cloud access

Step 3: Check audit logs (30-45 min)

  • Google Workspace: Admin console > Reports > Audit > Drive/Gmail
  • GitHub: Settings > Security log > filter by the app name
  • Look for access patterns outside normal hours or from unusual IPs
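The “outside normal hours” check in step 3 is easy to script against an exported log. A minimal sketch; the event shape and the working-hours window are assumptions, not any vendor’s real export format:

```python
from datetime import datetime

WORK_HOURS = range(8, 19)  # 08:00-18:59 -- tune to your team's timezone

def off_hours(events: list[dict]) -> list[dict]:
    """Flag audit-log events that happened outside normal working hours."""
    return [
        e for e in events
        if datetime.fromisoformat(e["time"]).hour not in WORK_HOURS
    ]

# Hypothetical export rows -- real audit logs use vendor-specific fields.
events = [
    {"time": "2026-03-14T03:12:00+00:00", "app": "Context.ai", "action": "drive.read"},
    {"time": "2026-03-14T10:05:00+00:00", "app": "Context.ai", "action": "drive.read"},
]
print([e["time"] for e in off_hours(events)])  # ['2026-03-14T03:12:00+00:00']
```

It is a crude heuristic, but a 3 AM Drive read from a third-party integration is exactly the pattern that preceded the Vercel disclosure.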

Step 4: Post-mortem (45-60 min)

  • Decide: re-authorize with minimum scopes, or switch to an alternative tool?
  • Document what the vendor had access to and what was potentially exposed
  • If you’re a team lead: draft a runbook so next time steps 1-3 are scripted

Solo devs: steps 1-2 take 15 minutes. Team leads: you need this runbook ready BEFORE the breach happens. The Vercel breach was disclosed on a Saturday. If your response plan requires a security team that’s off for the weekend, you’re already too slow.

Key insight: Third-party breaches average nearly $5 million per incident, and the average detection time is 267 days (IBM Cost of a Data Breach, 2025). The Vercel breach went undetected for approximately two months before disclosure. Having a pre-written response runbook cuts your exposure window from days to hours.


How does this connect to the npm supply chain defense?

OAuth supply chain and npm supply chain are two sides of the same coin: together they account for the majority of developer tool compromises in 2025-2026, with third-party involvement in breaches doubling year-over-year (SOCRadar, 2025). npm attacks execute code in your build pipeline. OAuth attacks steal your identity and workspace data. A complete defense covers both.

Layer    Attack surface              Defense                Post
Level 1  npm lifecycle scripts       ~/.npmrc hardening     The 30-Second npm Defense
Level 2  npm installs via AI agents  PreToolUse hooks       Stop npm Attacks with Hooks
Level 3  OAuth tokens for AI tools   Audit + scope hygiene  This post

The Vercel breach could have been prevented at multiple points: narrower OAuth scopes, regular token audits, monitoring for unusual third-party access. None of these require vendor cooperation or ecosystem-wide changes. They’re all things you can do today, on your own machine, in 5 minutes.

If you’re protecting secrets when using Claude Code on real projects, OAuth tokens are secrets too. They grant access to your Google Drive, your GitHub repos, your Slack messages. Treat them with the same rigor you’d treat an API key in your .env file.

Try it now:

  1. Open three tabs: myaccount.google.com/permissions, github.com/settings/applications, and npmjs.com/settings/~/tokens
  2. Revoke every app you don’t actively use
  3. Replace any classic GitHub PAT with a fine-grained token scoped to specific repos
  4. Set a 90-day calendar reminder to repeat this audit

Want to know when the next AI tool gets breached? One email per week, security-focused. Join AI Developer Weekly →


FAQ

Can OAuth tokens bypass MFA?

Yes. OAuth tokens are issued after initial authentication. Once granted, they act independently of MFA. An attacker with a stolen OAuth token never needs your password or second factor. This is by design: OAuth separates authorization from authentication. The token proves the app was authorized, not that the current user is authenticated.

Is revoking an OAuth app the same as deleting your account with that app?

No. Revoking removes the app’s access to your Google/GitHub/Slack data. Your account with the app (and any data stored on their servers) still exists. You may also want to delete your account with the vendor separately, especially if the vendor was compromised and may have stored your data.

How do I know if an AI tool has been compromised?

You usually don’t, until the vendor discloses. Monitor vendor status pages and subscribe to security bulletins. Check Google Workspace audit logs (Admin console > Reports > Audit) for unusual third-party access patterns. Push Security and Nudge Security can automate this monitoring for teams. For individuals, a quarterly manual audit is the practical minimum.

Should I use a separate Google account for AI tools?

Yes, if you’re connecting AI tools that need Google Workspace access. Use a dedicated account with no access to sensitive Drive folders, email, or admin privileges. This limits blast radius to that isolated account. If Context.ai had been connected to a sandboxed Google account instead of a Vercel employee’s primary workspace account, the breach would have exposed nothing of value.