

The Vercel Breach: A Technical Response Playbook for CISOs and Engineers
A technical response playbook for CISOs and engineers following the Vercel breach—covering kill chain analysis, credential rotation, secret scanning, OAuth risks, and incident response actions.
7 min read
On 19 April 2026, Vercel, the company behind Next.js and Turbopack and one of the most widely used frontend deployment platforms, disclosed that an attacker had gained unauthorized access to its internal systems. CEO Guillermo Rauch confirmed the full attack path on 20 April. The kill chain: a compromised third-party AI tool called Context.ai was the entry point, an employee's Google Workspace account was the pivot, and environment variables not flagged as "sensitive" across customer projects were the prize.
A threat actor claiming affiliation with ShinyHunters posted the stolen data on BreachForums with a $2M asking price, claiming access to internal databases, employee accounts, GitHub tokens, npm tokens, and source code fragments.
This article is not a news summary. It is a hands-on, command-by-command response guide. If you deploy on Vercel, have a Next.js application, or store secrets in any PaaS environment variable system, this is your checklist.
>Disclosure: This playbook is published by RiskProfiler.io. Our External Attack Surface Management (EASM) platform continuously discovers and fingerprints Vercel-hosted assets, exposed `.env` endpoints, dangling `*.vercel.app` subdomains, and leaked credentials across the clear, deep, and dark web — including BreachForums, where the Vercel data was listed. Several of the detection techniques described below are automated inside our platform. We are publishing them here in full so that every security team can act immediately, regardless of tooling.
1. Understanding the Kill Chain
The breach followed the mechanics of a textbook OAuth supply-chain escalation:
Stage 1 — Infostealer on a Context.ai employee
According to a report published by Hudson Rock, a Context.ai employee was compromised by Lumma Stealer in February 2026. Using malicious game-cheat downloads (Roblox auto-farm scripts) as the infection vector, the attackers harvested credentials that included Google Workspace logins and Supabase, Datadog, and AuthKit keys.
Stage 2 — Context.ai OAuth app compromise
Context.ai operated a Google Workspace OAuth application (the "AI Office Suite") that allowed AI agents to perform actions across connected external applications. The attacker compromised OAuth tokens for Context.ai's consumer users.
Stage 3 — Vercel employee account takeover
A Vercel employee had installed Context.ai's Chrome extension / AI Office suite and granted Allow All permissions to their corporate Google Workspace. The compromised OAuth token let the attacker pivot directly into this employee's Vercel Google Workspace account.
Stage 4 — Internal enumeration
From the Workspace foothold, the attacker escalated into Vercel's internal environments. Environment variables not flagged as "sensitive" were stored in an insecure manner that allowed enumeration. The attacker moved with what Rauch described as "surprising velocity and in-depth understanding of Vercel's systems", likely AI-accelerated.
Stage 5 — Data exfiltration
The attacker claims to have obtained: internal databases, employee account access, GitHub tokens, npm tokens, API keys, source code fragments, and activity timestamps.
2. The Published IOC
Following the breach confirmation, Vercel published exactly one indicator of compromise:
OAuth App ID: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
This is the Context.ai Google Workspace OAuth application. Every Google Workspace administrator reading this should check for it immediately.
How to Check in Google Workspace Admin Console
For Workspace Admins:
Navigate to admin.google.com → Security → API controls → App Access Control → Manage Third-Party App Access
Search for the OAuth client ID: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj
Alternatively, go to Reporting → Audit and investigation → OAuth log events
Filter by the app client ID above
Look for any authorization events, token grants, or API access from this app
For Individual Google Accounts:
Go to myaccount.google.com/permissions
Review all third-party apps with access to your account
Look for anything related to Context.ai, "AI Office Suite," or unfamiliar AI productivity tools
Revoke access immediately if found
If this OAuth app appears anywhere in your logs, treat it as evidence of potential compromise and initiate a full incident response.
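If you have already exported OAuth log events from the Admin console (Reporting → Audit and investigation → OAuth log events → Export), a quick offline check is possible. A minimal sketch; the helper function name and the export filename in the example are our assumptions:

```shell
# Client ID published by Vercel as the sole IOC (Context.ai's OAuth app)
IOC_CLIENT_ID="110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"

# Search an exported OAuth audit log (CSV or JSON) for the IOC.
# Prints matching rows; exit code 0 if found, 1 if not.
check_oauth_ioc() {
  grep -F "$IOC_CLIENT_ID" "$1"
}

# Example (hypothetical export filename):
# check_oauth_ioc oauth_log_events_export.csv && echo "IOC FOUND - start IR"
```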
3. Immediate Triage: What to Rotate and in What Order
When Vercel environment variables are exposed, response speed matters. This section outlines a practical rotation order based on impact, urgency, and the need to fully invalidate exposed secrets across deployments.
Tier 1 — Rotate Within the Hour (Crown Jewels)
These credential types give an attacker direct access to production data or the supply chain:
Database connection strings stored as Vercel env vars (PostgreSQL, MySQL, MongoDB Atlas, PlanetScale, Supabase, Neon)
Cloud provider credentials — AWS access keys, GCP service account JSON, Azure client secrets
Payment processor keys — Stripe secret keys, PayPal client secrets
Auth/signing secrets — JWT signing keys, NextAuth NEXTAUTH_SECRET, session encryption keys, HMAC secrets
GitHub Personal Access Tokens that were stored in Vercel env vars or that authorized the Vercel GitHub App installation
npm tokens — especially if you publish packages from Vercel CI/CD
Tier 2 — Rotate Within 24 Hours
Third-party SaaS API keys (SendGrid, Twilio, Resend, Postmark, Algolia, etc.)
CMS API tokens (Contentful, Sanity, Strapi, Prismic)
Analytics and monitoring tokens (Datadog, Sentry, LogRocket)
Vercel Deploy Hook URLs (these are full deploy triggers — an attacker can redeploy your app)
Vercel Deployment Protection tokens
Tier 3 — Rotate Within 72 Hours
Public-facing API keys that have backend restrictions (Google Maps, reCAPTCHA)
Feature flag service tokens (LaunchDarkly, Unleash, Split)
Any OAuth client secrets stored in env vars
Critical Detail: Rotating Keys Is Not Enough — You Must Redeploy
Vercel's architecture means that rotating an environment variable does not retroactively invalidate old deployments. Prior deployments continue using the old credential value until they are redeployed. Every credential rotation must be followed by:
A fresh vercel --prod deployment
Deletion or disabling of all previous deployment artifacts
Verification that the old credential no longer works from the old deployment URL
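The rotate-then-redeploy sequence can be scripted. A hedged sketch using real Vercel CLI subcommands (`vercel env rm`, `vercel env add`, `vercel --prod`); the function name and the assumption that the new secret arrives on stdin are ours:

```shell
# Rotate one production env var and force a fresh deployment so the
# old value is no longer live in any active deployment.
rotate_env_and_redeploy() {
  name="$1"
  vercel env rm "$name" production --yes   # remove the old value
  vercel env add "$name" production        # reads the new value from stdin
  vercel --prod                            # redeploy so the new value takes effect
}

# Example (generate and install a fresh NextAuth secret):
# openssl rand -hex 32 | rotate_env_and_redeploy NEXTAUTH_SECRET
```

Remember the verification step: after the new deployment is live, confirm the old credential is rejected by its upstream service, not just absent from Vercel.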
4. Scanning Your Codebase for Leaked Secrets
This section covers a practical secret-scanning workflow for Vercel and Next.js projects, using multiple tools to identify leaked credentials, verify their status, and uncover hardcoded patterns that may have been missed in routine reviews.
4.1 Pull Your Vercel Environment Variables
For every Vercel project you own, pull the current environment variables locally and scan them:
# Pull env vars for each project
cd /path/to/your-project
vercel env pull .env.local
# If you have multiple environments
vercel env pull .env.production --environment production
vercel env pull .env.preview --environment preview
vercel env pull .env.development --environment development
4.2 Scan with GitGuardian (ggshield)
# Install
pip install ggshield
# Authenticate (requires GitGuardian API key — free tier available)
ggshield auth login
# Scan the pulled env files
ggshield secret scan path .env.local
ggshield secret scan path .env.production
# Scan your entire project directory
ggshield secret scan path . --recursive
# Scan your entire git history for any previously committed secrets
ggshield secret scan repo .
4.3 Scan with TruffleHog (Open Source)
TruffleHog's key differentiator is live credential verification — it actually tests whether discovered secrets are still active:
# Install
brew install trufflehog
# or
pip install trufflehog
# Scan git repo with verification (shows only ACTIVE credentials)
trufflehog git file://. --results=verified --fail
# Scan the filesystem (catches .env files, config files, etc.)
trufflehog filesystem . --results=verified
# Scan your entire GitHub org
trufflehog github --org=your-org-name --token=$GITHUB_TOKEN --results=verified
# Scan S3 buckets where build artifacts may live
trufflehog s3 --bucket=your-build-bucket --key=$AWS_ACCESS_KEY --secret=$AWS_SECRET_KEY
# Scan Docker images (if you ship containers from Vercel builds)
trufflehog docker --image=your-registry/your-image:latest
4.4 Scan with Gitleaks
Gitleaks is lighter and faster than TruffleHog for pure git-history scanning:
# Install
brew install gitleaks
# Scan current repo including all git history
gitleaks detect --source . -v
# Scan only staged changes (use as pre-commit hook)
gitleaks protect --staged
# Generate a JSON report for tracking
gitleaks detect --source . --report-format json --report-path gitleaks-report.json
# Scan with custom rules targeting Vercel-specific patterns
gitleaks detect --source . --config /path/to/custom-gitleaks.toml
4.5 Custom Gitleaks Config for Vercel/Next.js Projects
Create a gitleaks-vercel.toml file:
title = "Vercel/Next.js Secret Patterns"
[[rules]]
id = "vercel-api-token"
description = "Vercel API Token"
regex = '''(?i)(?:vercel|zeit)[\w\-]*(?:token|key|secret|api)[\s=:]+["\']?([A-Za-z0-9]{24,})["\']?'''
tags = ["vercel", "api-token"]
[[rules]]
id = "next-auth-secret"
description = "NextAuth Secret"
regex = '''NEXTAUTH_SECRET\s*=\s*["\']?([^\s"\']+)["\']?'''
tags = ["nextauth", "secret"]
[[rules]]
id = "database-url"
description = "Database Connection String"
regex = '''(?i)(?:DATABASE_URL|DB_URL|POSTGRES_URL|MYSQL_URL|MONGODB_URI)\s*=\s*["\']?((?:postgres|mysql|mongodb(?:\+srv)?):\/\/[^\s"\']+)["\']?'''
tags = ["database", "connection-string"]
[[rules]]
id = "vercel-deploy-hook"
description = "Vercel Deploy Hook URL"
regex = '''https:\/\/api\.vercel\.com\/v1\/integrations\/deploy\/[A-Za-z0-9_\-]+'''
tags = ["vercel", "deploy-hook"]
[[rules]]
id = "stripe-secret-key"
description = "Stripe Secret Key"
regex = '''(?:sk_live_|sk_test_)[A-Za-z0-9]{24,}'''
tags = ["stripe", "payment"]
[[rules]]
id = "jwt-secret-inline"
description = "JWT Secret Inline"
regex = '''(?i)(?:jwt[\._\-]?secret|signing[\._\-]?key)\s*[=:]\s*["\']?([A-Za-z0-9+/=]{16,})["\']?'''
tags = ["jwt", "auth"]
[[rules]]
id = "aws-access-key"
description = "AWS Access Key"
regex = '''(?:AKIA|ABIA|ACCA|ASIA)[0-9A-Z]{16}'''
tags = ["aws", "cloud"]
[[rules]]
id = "npm-token"
description = "NPM Access Token"
regex = '''(?:npm_[A-Za-z0-9]{36}|\/\/registry\.npmjs\.org\/:_authToken=.+)'''
tags = ["npm", "supply-chain"]
Run it:
gitleaks detect --source . --config gitleaks-vercel.toml -v
5. GitHub Search Queries (Dorks) for Exposed Secrets
If you need to check whether any of your organization's secrets have leaked to public GitHub repositories, use these search queries. Replace YOUR_ORG with your GitHub organization or username.
Vercel-Specific Patterns
# Vercel tokens in code
org:YOUR_ORG VERCEL_TOKEN
# Vercel API tokens in env files
org:YOUR_ORG filename:.env VERCEL
# Deploy hooks (these are weaponizable URLs)
org:YOUR_ORG "api.vercel.com/v1/integrations/deploy"
# Vercel project configuration with secrets
org:YOUR_ORG filename:vercel.json "env"
# Vercel CLI auth tokens
org:YOUR_ORG filename:.vercel "token"
Next.js Environment Variable Leaks
# .env files that should never be committed
org:YOUR_ORG filename:.env.local
org:YOUR_ORG filename:.env.production
org:YOUR_ORG filename:.env.production.local
# Server secrets accidentally prefixed as public
org:YOUR_ORG "NEXT_PUBLIC_" "secret"
org:YOUR_ORG "NEXT_PUBLIC_" "password"
org:YOUR_ORG "NEXT_PUBLIC_" "DATABASE"
# NextAuth secrets
org:YOUR_ORG NEXTAUTH_SECRET
org:YOUR_ORG NEXTAUTH_URL "secret"
# Database URLs in Next.js configs
org:YOUR_ORG filename:next.config DATABASE_URL
org:YOUR_ORG filename:next.config "postgres://"
org:YOUR_ORG filename:next.config "mongodb+srv://"
Generic High-Value Secret Patterns
# AWS credentials
org:YOUR_ORG AKIA
org:YOUR_ORG AWS_SECRET_ACCESS_KEY
org:YOUR_ORG filename:.env AWS_ACCESS_KEY
# Stripe
org:YOUR_ORG sk_live_
org:YOUR_ORG filename:.env STRIPE_SECRET
# npm tokens
org:YOUR_ORG "npm_" filename:.npmrc
org:YOUR_ORG "_authToken" filename:.npmrc
# GitHub tokens
org:YOUR_ORG ghp_
org:YOUR_ORG github_pat_
org:YOUR_ORG filename:.env GITHUB_TOKEN
# JWT and session secrets
org:YOUR_ORG JWT_SECRET filename:.env
org:YOUR_ORG SESSION_SECRET filename:.env
# Private keys
org:YOUR_ORG "BEGIN RSA PRIVATE KEY"
org:YOUR_ORG "BEGIN OPENSSH PRIVATE KEY"
org:YOUR_ORG "BEGIN EC PRIVATE KEY"
# Generic connection strings
org:YOUR_ORG "postgresql://" filename:.env
org:YOUR_ORG "redis://" filename:.env
org:YOUR_ORG "amqp://" filename:.env
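These queries can also be driven from the GitHub CLI instead of the web UI. A sketch using `gh search code` (available in gh v2.x; code search requires authentication); the dork list below is a small subset of the patterns above:

```shell
# Run a handful of the secret dorks above against one org via the GitHub CLI.
# Requires a prior `gh auth login`.
search_org_dorks() {
  org="$1"
  for dork in "NEXTAUTH_SECRET" "sk_live_" "AWS_SECRET_ACCESS_KEY" "BEGIN RSA PRIVATE KEY"; do
    echo "== $dork =="
    # `|| true` keeps the loop going when a query returns no results
    gh search code "$dork" --owner "$org" --limit 10 || true
  done
}

# Example:
# search_org_dorks YOUR_ORG
```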
Broader Internet OSINT (Google Dorks)
For checking whether environment files from your domain have been indexed by search engines:
# Exposed .env files on your domain
site:yourdomain.com filetype:env
# Exposed .env files on your Vercel deployments
site:*.vercel.app filetype:env
# Exposed Next.js build manifests (can reveal internal routes)
site:yourdomain.com "_next/static" "buildManifest"
# Git directories accidentally exposed
site:yourdomain.com "/.git/config"
# Source maps that might contain inlined secrets
site:yourdomain.com filetype:map "_next"
6. Auditing Your Next.js Application for Env Var Exposure
Beyond the Vercel platform breach, there are application-level risks specific to Next.js that every engineer should audit.
6.1 The NEXT_PUBLIC_ Prefix Problem
Any environment variable prefixed with NEXT_PUBLIC_ is inlined into the JavaScript bundle at build time and shipped to every browser that loads your application. Audit your codebase:
# Find all NEXT_PUBLIC_ references in your codebase
grep -rn "NEXT_PUBLIC_" --include="*.ts" --include="*.tsx" --include="*.js" --include="*.jsx" .
# Find NEXT_PUBLIC_ vars in your .env files
grep "NEXT_PUBLIC_" .env* 2>/dev/null
# Check what's actually in your production bundle
# Build first, then inspect
next build
grep -r "NEXT_PUBLIC_" .next/static/ 2>/dev/null | head -50
Red flags to look for:
NEXT_PUBLIC_DATABASE_URL — database connection strings should never be public
NEXT_PUBLIC_API_SECRET — anything with "secret" should not have this prefix
NEXT_PUBLIC_STRIPE_SECRET_KEY — payment secret keys exposed to the browser
NEXT_PUBLIC_JWT_SECRET — authentication signing keys in the client bundle
NEXT_PUBLIC_ADMIN_* — admin credentials or endpoints exposed to all users
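Beyond grepping for the prefix, you can check whether the *values* of your server-side secrets leaked into the client bundle. A minimal sketch; it assumes env vars were pulled into `.env.local` (section 4.1) and that `next build` has produced `.next/static`:

```shell
# For every non-NEXT_PUBLIC_ variable in an env file, check whether its
# literal value appears anywhere in the client-side bundle output.
find_leaked_values() {
  envfile="${1:-.env.local}"
  bundle_dir="${2:-.next/static}"
  grep -v '^#' "$envfile" | grep '=' | grep -v '^NEXT_PUBLIC_' |
  while IFS='=' read -r name value; do
    # strip surrounding quotes; skip short values (too noisy to match)
    value=$(printf '%s' "$value" | sed 's/^["'\'']//; s/["'\'']$//')
    [ "${#value}" -lt 8 ] && continue
    if grep -rqF -- "$value" "$bundle_dir" 2>/dev/null; then
      echo "LEAKED: $name (value found in client bundle)"
    fi
  done
}

# Example:
# find_leaked_values .env.local .next/static
```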
6.2 Server-Side Secrets Leaking via SSG/ISR
Even without the NEXT_PUBLIC_ prefix, server-side environment variables can leak into statically generated HTML if referenced in getStaticProps or page-level components that render during build. Check:
# Search built HTML for potential secret patterns
find .next/server -name "*.html" -exec grep -l "sk_live\|AKIA\|postgres://\|mongodb+srv\|BEGIN.*PRIVATE" {} \;
# Check for secrets in server-rendered page data
find .next/server -name "*.json" -exec grep -l "password\|secret\|token\|apiKey" {} \;
# Inspect Next.js data payloads (__NEXT_DATA__)
find .next -name "*.html" -exec grep -o "__NEXT_DATA__.*</script>" {} \; | head -20
6.3 Source Maps in Production
Source maps can expose your entire server-side codebase including inlined secrets:
# Check if source maps are being generated in production
grep -r "productionBrowserSourceMaps\|devtool.*source-map" next.config.* 2>/dev/null
# Check if source maps are accessible on your live site
curl -s -o /dev/null -w "%{http_code}" https://yourdomain.com/_next/static/chunks/main.js.map
# If 200, your source maps are publicly accessible — disable immediately
In next.config.js, ensure:
module.exports = {
productionBrowserSourceMaps: false, // This MUST be false in production
}
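To probe every chunk rather than just `main.js`, you can enumerate local build output and request the corresponding `.map` URLs. A hedged sketch; it assumes the standard `.next/static/chunks` layout and that deployed paths mirror the local build:

```shell
# Probe the live site for a publicly accessible source map matching each
# local chunk. Any 200 response means maps are being served in production.
check_sourcemaps() {
  domain="$1"
  for js in .next/static/chunks/*.js; do
    [ -e "$js" ] || continue   # glob matched nothing
    path="_next/static/chunks/$(basename "$js").map"
    code=$(curl -s -o /dev/null -w "%{http_code}" "https://$domain/$path")
    [ "$code" = "200" ] && echo "EXPOSED: https://$domain/$path"
  done
}

# Example:
# check_sourcemaps yourdomain.com
```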
7. Auditing OAuth Apps and Third-Party Integrations
The Vercel breach is fundamentally an OAuth supply chain attack. One employee granting Allow All permissions to a third-party AI tool was the entire initial access vector.
7.1 Google Workspace OAuth Audit
# Using Google Workspace Admin SDK (requires admin access)
# List all OAuth tokens issued in your domain
# Via Admin Console:
# 1. admin.google.com → Security → API Controls → App Access Control
# 2. Review every app listed — especially those with:
# - "Allow All" or broad scopes
# - Access to Gmail, Drive, Calendar simultaneously
# - Apps you don't recognize
# - AI productivity tools (these are the new attack surface)
Key scopes to flag as high-risk:
<https://www.googleapis.com/auth/gmail.modify> — can read/send email
<https://www.googleapis.com/auth/drive> — full Drive access
<https://www.googleapis.com/auth/admin.directory.user> — can manage users
<https://mail.google.com/> — full Gmail access (the nuclear scope)
Any scope from an AI agent/assistant tool
7.2 Vercel Integration Audit
# List your Vercel integrations via CLI
vercel integrations list
# Check the Vercel dashboard:
# Settings → Integrations → Review every connected service
# Pay attention to:
# - GitHub integration (what repos does it have write access to?)
# - Linear integration (reported as disproportionately impacted)
# - Any custom integrations
# Review Vercel activity logs for the exposure window
vercel logs --since 2026-04-01
# Or via the dashboard: vercel.com/[team]/~/activity
# Filter for:
# - env.read, env.list API calls
# - Deployment creations from unexpected sources
# - Integration permission changes
# - New team member additions
7.3 GitHub Integration Audit
# Check which apps have access to your GitHub org
# Go to: github.com/organizations/YOUR_ORG/settings/installations
# Via API:
curl -H "Authorization: Bearer $GITHUB_TOKEN" \
https://api.github.com/orgs/YOUR_ORG/installations
# Check your org audit log for the exposure window
# github.com/organizations/YOUR_ORG/settings/audit-log
# Filter for:
# - integration_installation events
# - repo.access events
# - workflow_dispatch (manual action triggers)
# - release and tag creation events you didn't make
# Check GitHub Actions history for unexpected runs
gh run list --repo YOUR_ORG/YOUR_REPO --limit 50 --json event,conclusion,createdAt
# Check for unexpected branches
git branch -r --list 'origin/*' | while read branch; do
echo "$branch: $(git log -1 --format='%ai %an' "$branch")"
done
7.4 NPM Supply Chain Verification
If you publish npm packages and your publish workflow touches Vercel infrastructure at any point:
# Check npm publish history for unexpected versions
npm view YOUR_PACKAGE time --json
# Compare the published tarball against your git tag
npm pack YOUR_PACKAGE@latest
tar -xzf YOUR_PACKAGE-*.tgz
diff -r package/ /path/to/your/git/repo/dist/
# Check npm access tokens
npm token list
# Revoke any tokens you can't account for
npm token revoke $TOKEN_ID
# Enable 2FA on publish if not already
npm profile enable-2fa auth-and-writes
# Check if your package has been flagged by Socket.dev
npx socket analyze YOUR_PACKAGE
8. Third-Party Risk: Why Context.ai Should Have Been on Your Radar
The entire Vercel breach traces back to a single vendor: Context.ai, a small AI productivity tool that most security teams had never heard of, let alone assessed. This is the fundamental gap that Third-Party Risk Management (TPRM) is supposed to close, and where most TPRM programs fail.
Context.ai was not a vendor that went through procurement. It was a Chrome extension that an employee installed on their own. It requested, and was granted, `Allow All` OAuth permissions to a corporate Google Workspace. No security review. No scope restriction. No ongoing monitoring.
What a TPRM Platform Should Have Caught
This is the kind of incident a mature TPRM program is supposed to catch earlier. Third-party risk today is not limited to annual reviews or static questionnaires. It requires continuous discovery of shadow SaaS, real-time monitoring of vendor posture, visibility into OAuth permissions, and contextual escalation when external signals suggest a vendor may be drifting into active risk.
Shadow SaaS discovery
The first step is knowing that an unknown third-party tool, like Context.ai, exists in your environment at all. RiskProfiler's TPRM module discovers third-party services connected to your organization by correlating DNS records, OAuth grant logs, SSO authentication events, and browser extension telemetry. When an employee authorizes a new AI tool against your Google Workspace, that event should surface in your TPRM dashboard as an unvetted vendor, not buried in a Workspace admin log that nobody checks.
Vendor security posture scoring
RiskProfiler continuously assesses the external security posture of your third-party vendors. For Context.ai, that assessment would have flagged: a small team with limited security maturity, no SOC 2 or ISO 27001 certification, broad OAuth scope requirements (`Allow All`), a Chrome extension with access to sensitive Workspace data, and an AWS infrastructure footprint that, as we now know, was itself breached a month before Vercel was compromised. These signals compound into a risk score that should have triggered review before the tool was ever authorized.
Continuous vendor monitoring
Modern third-party security posture cannot be managed with point-in-time questionnaires. Context.ai's Chrome extension was removed from the Chrome Web Store on 27 March, three weeks before Vercel disclosed the breach. RiskProfiler's continuous monitoring would have flagged that removal as a risk signal (vendors pulling their own extensions from public marketplaces is a strong indicator of a security event) and escalated it to the vendor management team.
OAuth scope analysis
RiskProfiler maps the OAuth scopes each third-party vendor holds across your organization. An AI productivity tool holding `mail.google.com` (full Gmail access), `drive` (full Drive access), and broad Workspace admin scopes in a single grant is a textbook over-permission pattern. Our platform flags these as critical findings and recommends scope reduction or vendor removal.
The lesson is not "don't use AI tools." The lesson is: every OAuth grant to a third-party SaaS is a trust decision with supply chain consequences. If your TPRM program doesn't discover, assess, and continuously monitor these grants, you are flying blind — and the next Context.ai is already installed in your environment.
9. Open Source Tools Reference
Install paths and key commands below are common defaults; verify against each tool's documentation for your platform.

| Tool | Purpose | Install | Key Command |
|---|---|---|---|
| TruffleHog | Multi-source secret scanning with live verification | `brew install trufflehog` | `trufflehog git file://. --results=verified` |
| Gitleaks | Fast git-history secret scanning | `brew install gitleaks` | `gitleaks detect --source . -v` |
| ggshield | GitGuardian CLI (550+ secret types) | `pip install ggshield` | `ggshield secret scan repo .` |
| detect-secrets | Yelp's baseline-oriented scanner | `pip install detect-secrets` | `detect-secrets scan` |
| git-secret-scanner | Combined TruffleHog + Gitleaks org scanner | GitHub: padok-team/git-secret-scanner | see repo README |
| github-dork.py | Automated GitHub dork search | GitHub: techgaun/github-dorks | see repo README |
| Nuclei | Template-based vulnerability scanner | `brew install nuclei` | `nuclei -u https://target -t exposures/` |
| Socket CLI | npm/PyPI supply chain analysis | runs via npx | `npx socket analyze YOUR_PACKAGE` |
| Nudge Security | SaaS/OAuth app discovery | SaaS (no install) | Discover shadow SaaS and OAuth grants |
| Semgrep | Static analysis with secret detection rules | `pip install semgrep` | `semgrep --config p/secrets .` |
10. Cloud Provider Log Queries
If cloud or payment credentials were exposed, review logs immediately to check for misuse, persistence, or data access during the exposure window.
10.1 AWS CloudTrail
If AWS credentials were stored in Vercel env vars, check CloudTrail for unauthorized usage. The exposure window is conservatively from 1 April 2026 to the present day.
# Search for API calls from unexpected IPs or user agents
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIA_YOUR_KEY \
--start-time "2026-04-01T00:00:00Z" \
--end-time "2026-04-21T23:59:59Z" \
--output json
# Check for IAM enumeration (common first move after key theft)
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=GetCallerIdentity \
--start-time "2026-04-01T00:00:00Z"
# Look for S3 data exfiltration
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=GetObject \
--start-time "2026-04-01T00:00:00Z" \
--max-results 100
# Check for key creation (persistence mechanism)
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=CreateAccessKey \
--start-time "2026-04-01T00:00:00Z"
10.2 GCP Audit Logs
# Search for unusual API activity from service accounts
gcloud logging read \
'protoPayload.authenticationInfo.principalEmail="YOUR_SERVICE_ACCOUNT" AND
timestamp>="2026-04-01T00:00:00Z"' \
--project=YOUR_PROJECT \
--format=json \
--limit=500
# Check for data access from unusual IPs
gcloud logging read \
'protoPayload.requestMetadata.callerIp!="YOUR_KNOWN_IP" AND
timestamp>="2026-04-01T00:00:00Z"' \
--project=YOUR_PROJECT
10.3 Stripe Dashboard
If Stripe keys were in non-sensitive Vercel env vars:
Go to dashboard.stripe.com → Developers → API Keys
Roll your secret key immediately
Check Developers → Events for any suspicious API calls during the exposure window
Look for: charge creation, customer export, balance transfers, account updates
Check Developers → Webhooks for newly created webhook endpoints (persistence mechanism)
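The events review can also be done via the API. A sketch using Stripe's real `GET /v1/events` endpoint; the environment variable name and the jq filter in the example are assumptions:

```shell
# List recent Stripe events via the API.
# STRIPE_SECRET_KEY must be the (newly rolled) secret key.
list_stripe_events() {
  curl -s https://api.stripe.com/v1/events \
    -u "${STRIPE_SECRET_KEY}:" \
    -G -d limit=100
}

# Flag event types a key thief would typically generate:
# list_stripe_events | jq -r '.data[] | select(.type | test("charge|payout|transfer|webhook_endpoint")) | "\(.created) \(.type)"'
```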
11. Detection Queries for SIEM/Log Platforms
These detection queries help you quickly spot suspicious activity linked to potentially exposed credentials across major SIEM and log platforms.
Splunk
# Detect Vercel-related credential usage from unusual sources
index=cloudtrail sourcetype=aws:cloudtrail
userIdentity.accessKeyId IN ("AKIA_YOUR_EXPOSED_KEY_1", "AKIA_YOUR_EXPOSED_KEY_2")
| stats count by sourceIPAddress, eventName, userAgent
| where sourceIPAddress!="YOUR_KNOWN_CIDR"
# Detect Google Workspace OAuth grants for the compromised app
index=gworkspace sourcetype=google:workspace:activity
events.name="authorize"
events.parameters.client_id="110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
| table _time, actor.email, events.parameters.app_name, events.parameters.scope
Datadog
# Search for API calls using potentially compromised credentials
@evt.name:AwsApiCall @usr.access_key_id:AKIA_YOUR_KEY
-@network.client.ip:YOUR_KNOWN_IP
| stats count by @network.client.ip, @evt.name
Elastic/OpenSearch
{
"query": {
"bool": {
"must": [
{"match": {"userIdentity.accessKeyId": "AKIA_YOUR_KEY"}},
{"range": {"eventTime": {"gte": "2026-04-01T00:00:00Z"}}}
],
"must_not": [
{"match": {"sourceIPAddress": "YOUR_KNOWN_IP"}}
]
}
}
}
Dark Web and Credential Leak Monitoring
The SIEM queries above detect usage of compromised credentials against your own infrastructure. But you also need to know when your credentials appear on dark web marketplaces, paste sites, stealer log dumps, and Telegram channels.
The Vercel breach data appeared on BreachForums hours before Vercel published its security bulletin. Trend Micro's analysis identified a nine-day gap between the earliest evidence of credential exposure (an OpenAI leaked-credential notification on 10 April) and Vercel's public disclosure on 19 April. During that window, credentials were circulating in the hidden channels with no customer notification.
This is exactly the detection gap that RiskProfiler's Digital Risk Protection (DRP) module is designed to close:
Stealer log monitoring
RiskProfiler’s agentic AI-powered dark web threat intelligence module, KnyX Dark Web AI, ingests and correlates stealer log datasets (Lumma, RedLine, Vidar, Raccoon, and others) against your organization's email domains, IP ranges, and asset inventory. The Vercel breach originated from a Lumma Stealer infection on a Context.ai employee in February 2026. With proactive third-party risk monitoring of this kind, similar stealer-log hits can be flagged well in advance, helping organizations identify cascading breaches like this one before they escalate.
BreachForums and dark web marketplace monitoring
When the threat actor posted Vercel data on BreachForums with a $2M asking price, RiskProfiler's dark web crawlers identified and parsed the listing, extracted the claimed data categories (GitHub tokens, npm tokens, API keys, source code), and correlated them against customer asset inventories. Organizations with active dark web monitoring modules received alerts within hours of the listing, not days, allowing them to begin containment before the leaked data could be used for unauthorized access.
Credential leak correlation
RiskProfiler cross-references leaked credentials against your known asset inventory with its attack path mapping feature. If a database connection string for `your-company-db.us-east-1.rds.amazonaws.com` appears in a leak, and that hostname is in your external attack surface management inventory, the alert flags the incident with full context, the leaked credential, the asset it unlocks, and the blast radius if exploited.
Automated provider notifications
As Trend Micro noted, services like OpenAI, AWS, GitHub, and Stripe operate their own leaked-credential detection systems. RiskProfiler aggregates these notifications alongside its own detection to ensure that no signal is missed when a credential surfaces in the wild.
12. Hardening Checklist: Preventing the Next One
Once containment is complete, the focus shifts to reducing the chance of the same exposure happening again. This hardening checklist covers the key controls across Vercel, Google Workspace, Next.js, and your supply chain to help close common gaps and strengthen long-term resilience.
Vercel-Specific
☑︎ Mark all environment variables containing secrets as "sensitive" in Vercel (now defaults to on for new vars)
☑︎ Audit and remove any unused environment variables
☑︎ Enable Deployment Protection at Standard or above
☑︎ Rotate all Deploy Hook URLs
☑︎ Review and minimize GitHub integration repository scope
☑︎ Remove any Linear or other third-party integrations you aren't actively using
☑︎ Enable 2FA on your Vercel account with an authenticator app or passkey
☑︎ Review team membership — remove anyone who shouldn't have access
Google Workspace
☑︎ Audit all OAuth app grants across your organization
☑︎ Implement an OAuth app allowlist — block unapproved apps from accessing Workspace data
☑︎ Enforce least-privilege scopes for approved apps
☑︎ Set up alerting on new OAuth grants, especially those with broad scopes
☑︎ Review and restrict the ability of users to grant OAuth access to enterprise accounts
☑︎ Enable Google Workspace DLP rules to flag credential-like patterns in email and Drive
Next.js Application
☑︎ Audit every NEXT_PUBLIC_ variable — ensure no secrets are prefixed this way
☑︎ Use the server-only package to enforce server/client boundaries
☑︎ Disable productionBrowserSourceMaps in production
☑︎ Add Gitleaks as a pre-commit hook to prevent secrets from entering git history
☑︎ Add TruffleHog to your CI pipeline with --results=verified --fail
☑︎ Implement the NEXTJS_SAFE_NEXT_PUBLIC_ENV_USAGE conformance rule
Supply Chain
☑︎ Enable npm 2FA for auth-and-writes
☑︎ Pin GitHub Actions to commit SHAs, not tags
☑︎ Audit npm view <your-package> time --json for unexpected publish events
☑︎ Compare published tarballs to your git tags
☑︎ Use Socket.dev or Snyk to monitor your dependency tree for compromised packages
External Attack Surface and Vendor Risk (Continuous)
☑︎ Deploy an EASM platform to continuously discover and fingerprint all Vercel-hosted assets across your organization
☑︎ Monitor for exposed `.env` files, source maps, and `__NEXT_DATA__` payloads at the edge
☑︎ Enable dark web and credential leak monitoring against your domains and known asset hostnames
☑︎ Detect dangling DNS records (CNAMEs to `cname.vercel-dns.com` for deleted projects)
☑︎ Inventory all third-party SaaS with OAuth access to your Google Workspace / Azure AD / Okta
☑︎ Score and continuously monitor the security posture of every vendor with OAuth grants
☑︎ Set up alerts for vendor signals: Chrome extension removals, domain changes, breach disclosures
☑︎ Monitor stealer log feeds for credentials associated with your organization and your critical vendors
☑︎ Correlate leaked credentials with your asset inventory to assess blast radius automatically
13. The Bigger Picture: OAuth Is the New Lateral Movement
This breach is not an isolated incident. It fits a 2026 convergence pattern where attackers consistently target developer-stored credentials across CI/CD platforms, package registries, OAuth integrations, and deployment platforms. The LiteLLM and Axios supply chain attacks earlier this year followed similar patterns.
The core lesson is architectural. OAuth tokens granted to third-party SaaS, especially AI productivity tools, are high-value credentials that need to be managed with the same rigor as SSH keys or cloud IAM roles. An employee installing a Chrome extension and clicking "Allow All" on their corporate Google account is now a top-tier initial access vector.
Until the industry treats OAuth tokens as the high-value credentials they are, we will keep reading the same breach report with different vendor names swapped in.
Where RiskProfiler Fits in This New Reality
The Vercel breach crossed three domains that are traditionally handled separately in most security programs. RiskProfiler's agentic AI-powered, consolidated external threat intelligence platform unifies scanning, detection, and prioritization across these otherwise siloed modules, enabling contextual visibility and controlled remediation.
External Attack Surface Management (EASM)
RiskProfiler’s external attack surface management capability gives you an adaptive view of your internet-facing assets, enabling fast and effective incident containment and remediation. In the context of this breach, RiskProfiler tracks all Vercel-hosted applications, preview and production deployments, exposed APIs, certificates, subdomains, cloud-connected services, and ownership relationships. That visibility is what lets teams quickly identify which environments are exposed, which deployments are still reachable, and where attacker-accessible blind spots remain. KnyX Recon AI, the agentic AI-powered EASM module, continuously monitors these external assets and maps them contextually, reducing the inventory gaps that slow triage and containment.
Digital Risk Protection (DRP)
The real danger is not just the initial exposure, but what happens in the gap before formal disclosure. RiskProfiler’s Brand Risk Protection, Identity Intelligence, and Dark Web Monitoring capabilities are built to detect leaked credentials, access codes, malicious brand references, look-alike domains, phishing kits, exposed tokens, code, and sensitive records across open, deep, and dark web sources. That means security teams can identify when leaked Vercel-linked secrets, source fragments, or impersonation infrastructure begin circulating externally, then move faster on secret rotation, takedowns, and customer-protection workflows before attackers scale abuse.
Third-Party Risk Management (TPRM)
This incident also shows why third-party exposure can no longer be treated as a separate workflow from breach response. RiskProfiler’s Third-Party Risk Management capability continuously monitors supply chain connections and extended vendor relationships, combining adaptive vendor risk questionnaires, threat scoring, and ongoing exposure scanning to surface suppliers, SaaS tools, and integrations that increase blast radius. In a case like the Vercel breach, that helps teams assess which vendors had excessive access, which shadow tools introduced avoidable risk, and which third-party relationships now require reassessment, restriction, or remediation.
RiskProfiler unifies all three on a single platform, powered by our agentic AI engine KnyX, which correlates signals across your attack surface, vendor ecosystem, and dark web exposure to surface the alerts that matter, filtering out the noise that doesn't.
If the Vercel breach has taught you anything, it should be this: the blast radius of a modern supply chain attack spans your deployment platform, your vendor ecosystem, and the dark web — simultaneously. Your detection and response capability needs to span all three as well.
Request a demo today or explore the platform to understand how RiskProfiler unifies your threat management workflow.
This document will be updated as Vercel releases additional IOCs and as the Mandiant investigation progresses. Last updated: 21 April 2026.
1. Understanding the Kill Chain
The breach followed the mechanics of a textbook OAuth supply chain escalation:
Stage 1 — Infostealer on a Context.ai employee
According to a report published by Hudson Rock, a Context.ai employee was compromised by Lumma Stealer in February 2026. The attackers used malicious game-exploit downloads (Roblox auto-farm scripts) as the infection vector and harvested credentials including Google Workspace logins and Supabase, Datadog, and AuthKit keys.
Stage 2 — Context.ai OAuth app compromise
Context.ai operated a Google Workspace OAuth application (the "AI Office Suite") that allowed AI agents to perform actions across connected external applications. The attacker compromised OAuth tokens for Context.ai's consumer users.
Stage 3 — Vercel employee account takeover
A Vercel employee had installed Context.ai's Chrome extension / AI Office suite and granted Allow All permissions to their corporate Google Workspace. The compromised OAuth token let the attacker pivot directly into this employee's Vercel Google Workspace account.
Stage 4 — Internal enumeration
From the Workspace foothold, the attacker escalated into Vercel's internal environments. Environment variables not flagged as "sensitive" were stored in an insecure manner that allowed enumeration. The attacker moved with what Rauch described as "surprising velocity and in-depth understanding of Vercel's systems", likely AI-accelerated.
Stage 5 — Data exfiltration
The attacker claims to have obtained: internal databases, employee account access, GitHub tokens, npm tokens, API keys, source code fragments, and activity timestamps.
2. The Published IOC
Following the breach confirmation, Vercel published exactly one indicator of compromise:
OAuth App ID: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
This is the Context.ai Google Workspace OAuth application. Every Google Workspace administrator reading this should check for it immediately.
How to Check in Google Workspace Admin Console
For Workspace Admins:
Navigate to admin.google.com → Security → API controls → App Access Control → Manage Third-Party App Access
Search for the OAuth client ID: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj
Alternatively, go to Reporting → Audit and investigation → OAuth log events
Filter by the app client ID above
Look for any authorization events, token grants, or API access from this app
For Individual Google Accounts:
Go to myaccount.google.com/permissions
Review all third-party apps with access to your account
Look for anything related to Context.ai, "AI Office Suite," or unfamiliar AI productivity tools
Revoke access immediately if found
If this OAuth app appears anywhere in your logs, treat it as evidence of potential compromise and initiate a full incident response.
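If you prefer to check programmatically, the Admin SDK Directory API exposes per-user token grants. The matching logic is sketched below in Python; the token records are illustrative stand-ins for `tokens.list()` output, and fetching them (authentication, pagination) is left out of this sketch.

```python
# Sketch: flag Google Workspace OAuth token grants matching the published IOC.
# Token records are assumed to look like Admin SDK Directory API tokens.list()
# items (fields: clientId, displayText); the sample grants below are fabricated.
IOC_CLIENT_ID = "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"

def find_ioc_grants(tokens, ioc_client_id=IOC_CLIENT_ID):
    """Return every grant whose clientId matches the Context.ai IOC app."""
    # Match on the bare client ID too, since some log sources omit the suffix.
    bare = ioc_client_id.split(".apps.googleusercontent.com")[0]
    return [t for t in tokens if t.get("clientId", "").startswith(bare)]

grants = [
    {"clientId": IOC_CLIENT_ID, "displayText": "AI Office Suite", "user": "alice@example.com"},
    {"clientId": "other-app.apps.googleusercontent.com", "displayText": "Calendar sync", "user": "bob@example.com"},
]
for g in find_ioc_grants(grants):
    print(f"IOC MATCH: {g['user']} -> {g['displayText']}")
```

Any match is an incident trigger, not a finding to queue.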
3. Immediate Triage: What to Rotate and in What Order
When Vercel environment variables are exposed, response speed matters. This section outlines a practical rotation order based on impact, urgency, and the need to fully invalidate exposed secrets across deployments.
Tier 1 — Rotate Within the Hour (Crown Jewels)
These credential types give an attacker direct access to production data or the supply chain:
Database connection strings stored as Vercel env vars (PostgreSQL, MySQL, MongoDB Atlas, PlanetScale, Supabase, Neon)
Cloud provider credentials — AWS access keys, GCP service account JSON, Azure client secrets
Payment processor keys — Stripe secret keys, PayPal client secrets
Auth/signing secrets — JWT signing keys, NEXTAUTH_SECRET, session encryption keys, HMAC secrets
GitHub Personal Access Tokens that were stored in Vercel env vars or that authorized the Vercel GitHub App installation
npm tokens — especially if you publish packages from Vercel CI/CD
Tier 2 — Rotate Within 24 Hours
Third-party SaaS API keys (SendGrid, Twilio, Resend, Postmark, Algolia, etc.)
CMS API tokens (Contentful, Sanity, Strapi, Prismic)
Analytics and monitoring tokens (Datadog, Sentry, LogRocket)
Vercel Deploy Hook URLs (these are full deploy triggers — an attacker can redeploy your app)
Vercel Deployment Protection tokens
Tier 3 — Rotate Within 72 Hours
Public-facing API keys that have backend restrictions (Google Maps, reCAPTCHA)
Feature flag service tokens (LaunchDarkly, Unleash, Split)
Any OAuth client secrets stored in env vars
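Tier assignment can be scripted for faster triage. Below is a minimal Python sketch that buckets variables from a pulled env file by name; the patterns are illustrative heuristics, not an exhaustive taxonomy.

```python
import re

# Sketch: bucket variables from a pulled env file into the rotation tiers above.
# Name patterns are illustrative heuristics; extend them for your stack.
TIER_PATTERNS = [
    (1, re.compile(r"DATABASE_URL|POSTGRES|MYSQL|MONGODB|AWS_|GCP_|AZURE_|"
                   r"STRIPE_SECRET|PAYPAL|JWT|NEXTAUTH_SECRET|SESSION_SECRET|"
                   r"GITHUB_TOKEN|NPM_TOKEN", re.I)),
    (2, re.compile(r"SENDGRID|TWILIO|RESEND|POSTMARK|ALGOLIA|CONTENTFUL|SANITY|"
                   r"STRAPI|PRISMIC|DATADOG|SENTRY|DEPLOY_HOOK", re.I)),
    (3, re.compile(r"MAPS_API|RECAPTCHA|LAUNCHDARKLY|UNLEASH|SPLIT_|CLIENT_SECRET", re.I)),
]

def rotation_tier(name: str) -> int:
    """Map a variable name to its rotation tier (1 = within the hour)."""
    for tier, pattern in TIER_PATTERNS:
        if pattern.search(name):
            return tier
    return 3  # default to a 72-hour review rather than ignoring the variable

def triage_env_file(text: str) -> dict:
    tiers = {1: [], 2: [], 3: []}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        name = line.split("=", 1)[0].strip()
        tiers[rotation_tier(name)].append(name)
    return tiers

sample = "DATABASE_URL=postgres://...\nSENDGRID_API_KEY=SG.xxx\nNEXT_PUBLIC_MAPS_API_KEY=abc\n"
print(triage_env_file(sample))
```

Run it against each file produced by `vercel env pull` and work the Tier 1 list first.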
Critical Detail: Rotating Keys Is Not Enough — You Must Redeploy
Vercel's architecture means that rotating an environment variable does not retroactively invalidate old deployments. Prior deployments continue using the old credential value until they are redeployed. Every credential rotation must be followed by:
A fresh vercel --prod deployment
Deletion or disabling of all previous deployment artifacts
Verification that the old credential no longer works from the old deployment URL
4. Scanning Your Codebase for Leaked Secrets
This section covers a practical secret-scanning workflow for Vercel and Next.js projects, using multiple tools to identify leaked credentials, verify their status, and uncover hardcoded patterns that may have been missed in routine reviews.
4.1 Pull Your Vercel Environment Variables
For every Vercel project you own, pull the current environment variables locally and scan them:
# Pull env vars for each project
cd /path/to/your-project
vercel env pull .env.local
# If you have multiple environments
vercel env pull .env.production --environment production
vercel env pull .env.preview --environment preview
vercel env pull .env.development --environment development
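Once pulled, a quick win is to diff the environments against each other: variables that exist only in production are often forgotten legacy secrets that never get rotated. A small Python sketch (the file contents below are examples):

```python
# Sketch: compare pulled env files across environments to spot variables that
# exist in only one of them. File contents here are fabricated examples.
def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and comments."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)
            out[key.strip()] = value.strip()
    return out

def diff_envs(a: dict, b: dict):
    """Return keys present only in a and keys present only in b."""
    return sorted(set(a) - set(b)), sorted(set(b) - set(a))

prod = parse_env("DATABASE_URL=postgres://prod\nLEGACY_API_KEY=old\n")
prev = parse_env("DATABASE_URL=postgres://preview\n")
only_prod, only_prev = diff_envs(prod, prev)
print("only in production:", only_prod)  # audit these first
```

In practice you would feed it the `.env.production` and `.env.preview` files pulled above.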
4.2 Scan with GitGuardian (ggshield)
# Install
pip install ggshield
# Authenticate (requires GitGuardian API key — free tier available)
ggshield auth login
# Scan the pulled env files
ggshield secret scan path .env.local
ggshield secret scan path .env.production
# Scan your entire project directory
ggshield secret scan path . --recursive
# Scan your entire git history for any previously committed secrets
ggshield secret scan repo .
4.3 Scan with TruffleHog (Open Source)
TruffleHog's key differentiator is live credential verification — it actually tests whether discovered secrets are still active:
# Install
brew install trufflehog
# or
pip install trufflehog
# Scan git repo with verification (shows only ACTIVE credentials)
trufflehog git file://. --results=verified --fail
# Scan the filesystem (catches .env files, config files, etc.)
trufflehog filesystem . --results=verified
# Scan your entire GitHub org
trufflehog github --org=your-org-name --token=$GITHUB_TOKEN --results=verified
# Scan S3 buckets where build artifacts may live
trufflehog s3 --bucket=your-build-bucket --key=$AWS_ACCESS_KEY --secret=$AWS_SECRET_KEY
# Scan Docker images (if you ship containers from Vercel builds)
trufflehog docker --image=your-registry/your-image:latest
4.4 Scan with Gitleaks
Gitleaks is lighter and faster than TruffleHog for pure git-history scanning:
# Install
brew install gitleaks
# Scan current repo including all git history
gitleaks detect --source . -v
# Scan only staged changes (use as pre-commit hook)
gitleaks protect --staged
# Generate a JSON report for tracking
gitleaks detect --source . --report-format json --report-path gitleaks-report.json
# Scan with custom rules targeting Vercel-specific patterns
gitleaks detect --source . --config /path/to/custom-gitleaks.toml
4.5 Custom Gitleaks Config for Vercel/Next.js Projects
Create a gitleaks-vercel.toml file:
title = "Vercel/Next.js Secret Patterns"
[[rules]]
id = "vercel-api-token"
description = "Vercel API Token"
regex = '''(?i)(?:vercel|zeit)[\w\-]*(?:token|key|secret|api)[\s=:]+["\']?([A-Za-z0-9]{24,})["\']?'''
tags = ["vercel", "api-token"]
[[rules]]
id = "next-auth-secret"
description = "NextAuth Secret"
regex = '''NEXTAUTH_SECRET\s*=\s*["\']?([^\s"\']+)["\']?'''
tags = ["nextauth", "secret"]
[[rules]]
id = "database-url"
description = "Database Connection String"
regex = '''(?i)(?:DATABASE_URL|DB_URL|POSTGRES_URL|MYSQL_URL|MONGODB_URI)\s*=\s*["\']?((?:postgres|mysql|mongodb(?:\+srv)?):\/\/[^\s"\']+)["\']?'''
tags = ["database", "connection-string"]
[[rules]]
id = "vercel-deploy-hook"
description = "Vercel Deploy Hook URL"
regex = '''https:\/\/api\.vercel\.com\/v1\/integrations\/deploy\/[A-Za-z0-9_\-]+'''
tags = ["vercel", "deploy-hook"]
[[rules]]
id = "stripe-secret-key"
description = "Stripe Secret Key"
regex = '''(?:sk_live_|sk_test_)[A-Za-z0-9]{24,}'''
tags = ["stripe", "payment"]
[[rules]]
id = "jwt-secret-inline"
description = "JWT Secret Inline"
regex = '''(?i)(?:jwt[\._\-]?secret|signing[\._\-]?key)\s*[=:]\s*["\']?([A-Za-z0-9+/=]{16,})["\']?'''
tags = ["jwt", "auth"]
[[rules]]
id = "aws-access-key"
description = "AWS Access Key"
regex = '''(?:AKIA|ABIA|ACCA|ASIA)[0-9A-Z]{16}'''
tags = ["aws", "cloud"]
[[rules]]
id = "npm-token"
description = "NPM Access Token"
regex = '''(?:npm_[A-Za-z0-9]{36}|\/\/registry\.npmjs\.org\/:_authToken=.+)'''
tags = ["npm", "supply-chain"]
Run it:
gitleaks detect --source . --config gitleaks-vercel.toml -v
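Before trusting custom rules in CI, it is worth sanity-checking that each regex actually matches a known-format sample. A short Python self-test using three of the patterns above (the dummy values are fabricated and inert):

```python
import re

# Sketch: self-test custom Gitleaks regexes against known-format dummy strings.
# Patterns are copied verbatim from the TOML above; samples are fabricated.
patterns = {
    "stripe-secret-key": r"(?:sk_live_|sk_test_)[A-Za-z0-9]{24,}",
    "aws-access-key": r"(?:AKIA|ABIA|ACCA|ASIA)[0-9A-Z]{16}",
    "vercel-deploy-hook": r"https:\/\/api\.vercel\.com\/v1\/integrations\/deploy\/[A-Za-z0-9_\-]+",
}
samples = {
    "stripe-secret-key": "sk_test_" + "a" * 24,
    "aws-access-key": "AKIA" + "A" * 16,
    "vercel-deploy-hook": "https://api.vercel.com/v1/integrations/deploy/prj_abc123",
}
for rule_id, pat in patterns.items():
    hit = re.search(pat, samples[rule_id])
    print(rule_id, "matches" if hit else "MISSES its sample")
```

A rule that silently misses its own sample gives false confidence, which is worse than no rule.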
5. GitHub Search Queries (Dorks) for Exposed Secrets
If you need to check whether any of your organization's secrets have leaked to public GitHub repositories, use these search queries. Replace YOUR_ORG with your GitHub organization or username.
Vercel-Specific Patterns
# Vercel tokens in code
org:YOUR_ORG VERCEL_TOKEN
# Vercel API tokens in env files
org:YOUR_ORG filename:.env VERCEL
# Deploy hooks (these are weaponizable URLs)
org:YOUR_ORG "api.vercel.com/v1/integrations/deploy"
# Vercel project configuration with secrets
org:YOUR_ORG filename:vercel.json "env"
# Vercel CLI auth tokens
org:YOUR_ORG filename:.vercel "token"
Next.js Environment Variable Leaks
# .env files that should never be committed
org:YOUR_ORG filename:.env.local
org:YOUR_ORG filename:.env.production
org:YOUR_ORG filename:.env.production.local
# Server secrets accidentally prefixed as public
org:YOUR_ORG "NEXT_PUBLIC_" "secret"
org:YOUR_ORG "NEXT_PUBLIC_" "password"
org:YOUR_ORG "NEXT_PUBLIC_" "DATABASE"
# NextAuth secrets
org:YOUR_ORG NEXTAUTH_SECRET
org:YOUR_ORG NEXTAUTH_URL "secret"
# Database URLs in Next.js configs
org:YOUR_ORG filename:next.config DATABASE_URL
org:YOUR_ORG filename:next.config "postgres://"
org:YOUR_ORG filename:next.config "mongodb+srv://"
Generic High-Value Secret Patterns
# AWS credentials
org:YOUR_ORG AKIA
org:YOUR_ORG AWS_SECRET_ACCESS_KEY
org:YOUR_ORG filename:.env AWS_ACCESS_KEY
# Stripe
org:YOUR_ORG sk_live_
org:YOUR_ORG filename:.env STRIPE_SECRET
# npm tokens
org:YOUR_ORG "npm_" filename:.npmrc
org:YOUR_ORG "_authToken" filename:.npmrc
# GitHub tokens
org:YOUR_ORG ghp_
org:YOUR_ORG github_pat_
org:YOUR_ORG filename:.env GITHUB_TOKEN
# JWT and session secrets
org:YOUR_ORG JWT_SECRET filename:.env
org:YOUR_ORG SESSION_SECRET filename:.env
# Private keys
org:YOUR_ORG "BEGIN RSA PRIVATE KEY"
org:YOUR_ORG "BEGIN OPENSSH PRIVATE KEY"
org:YOUR_ORG "BEGIN EC PRIVATE KEY"
# Generic connection strings
org:YOUR_ORG "postgresql://" filename:.env
org:YOUR_ORG "redis://" filename:.env
org:YOUR_ORG "amqp://" filename:.env
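To make the dorks easier to work through, you can generate clickable GitHub code-search URLs from them. A small Python sketch; the dork subset and the YOUR_ORG placeholder mirror the list above:

```python
from urllib.parse import quote_plus

# Sketch: turn a subset of the dorks above into GitHub code-search URLs.
DORKS = [
    'org:{org} filename:.env VERCEL',
    'org:{org} "api.vercel.com/v1/integrations/deploy"',
    'org:{org} NEXTAUTH_SECRET',
    'org:{org} sk_live_',
    'org:{org} "BEGIN RSA PRIVATE KEY"',
]

def search_urls(org: str):
    """Return one GitHub code-search URL per dork, for the given org."""
    return ["https://github.com/search?type=code&q=" + quote_plus(d.format(org=org))
            for d in DORKS]

for url in search_urls("YOUR_ORG"):
    print(url)
```

Paste the output into a browser (or a ticket) and work through the hits one by one.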
Broader Internet OSINT (Google Dorks)
For checking whether environment files from your domain have been indexed by search engines:
# Exposed .env files on your domain
site:yourdomain.com filetype:env
# Exposed .env files on your Vercel deployments
site:*.vercel.app filetype:env
# Exposed Next.js build manifests (can reveal internal routes)
site:yourdomain.com "_next/static" "buildManifest"
# Git directories accidentally exposed
site:yourdomain.com "/.git/config"
# Source maps that might contain inlined secrets
site:yourdomain.com filetype:map "_next"
6. Auditing Your Next.js Application for Env Var Exposure
Beyond the Vercel platform breach, there are application-level risks specific to Next.js that every engineer should audit.
6.1 The NEXT_PUBLIC_ Prefix Problem
Any environment variable prefixed with NEXT_PUBLIC_ is inlined into the JavaScript bundle at build time and shipped to every browser that loads your application. Audit your codebase:
# Find all NEXT_PUBLIC_ references in your codebase
grep -rn "NEXT_PUBLIC_" --include="*.ts" --include="*.tsx" --include="*.js" --include="*.jsx" .
# Find NEXT_PUBLIC_ vars in your .env files
grep "NEXT_PUBLIC_" .env* 2>/dev/null
# Check what's actually in your production bundle
# Build first, then inspect
next build
grep -r "NEXT_PUBLIC_" .next/static/ 2>/dev/null | head -50
Red flags to look for:
NEXT_PUBLIC_DATABASE_URL — database connection strings should never be public
NEXT_PUBLIC_API_SECRET — anything with "secret" should not have this prefix
NEXT_PUBLIC_STRIPE_SECRET_KEY — payment secret keys exposed to the browser
NEXT_PUBLIC_JWT_SECRET — authentication signing keys in the client bundle
NEXT_PUBLIC_ADMIN_* — admin credentials or endpoints exposed to all users
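The grep above finds variable names; to find secret-shaped values that were inlined, scan the built bundle directly. A Python sketch with illustrative patterns (run it after `next build`):

```python
import re
import pathlib

# Sketch: scan the client bundle for values that look like inlined secrets.
# Patterns are illustrative; extend them for the providers you actually use.
SECRET_SHAPES = re.compile(
    r"sk_live_[A-Za-z0-9]{24,}"          # Stripe live secret key
    r"|AKIA[0-9A-Z]{16}"                 # AWS access key ID
    r"|postgres(?:ql)?://[^\s\"']+"      # database connection strings
    r"|mongodb(?:\+srv)?://[^\s\"']+"
)

def scan_bundle(static_dir=".next/static"):
    """Return (path, truncated match) pairs for secret-shaped strings in JS chunks."""
    base = pathlib.Path(static_dir)
    if not base.is_dir():
        return []
    findings = []
    for path in base.rglob("*.js"):
        for match in SECRET_SHAPES.finditer(path.read_text(errors="ignore")):
            findings.append((str(path), match.group(0)[:40]))
    return findings

for path, snippet in scan_bundle():
    print(f"{path}: {snippet}")
```

Any hit here means the value shipped to every visitor's browser and must be treated as compromised.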
6.2 Server-Side Secrets Leaking via SSG/ISR
Even without the NEXT_PUBLIC_ prefix, server-side environment variables can leak into statically generated HTML if referenced in getStaticProps or page-level components that render during build. Check:
# Search built HTML for potential secret patterns
find .next/server -name "*.html" -exec grep -l "sk_live\|AKIA\|postgres://\|mongodb+srv\|BEGIN.*PRIVATE" {} \;
# Check for secrets in server-rendered page data
find .next/server -name "*.json" -exec grep -l "password\|secret\|token\|apiKey" {} \;
# Inspect Next.js data payloads (__NEXT_DATA__)
find .next -name "*.html" -exec grep -o "__NEXT_DATA__.*</script>" {} \; | head -20
6.3 Source Maps in Production
Source maps can expose your entire server-side codebase including inlined secrets:
# Check if source maps are being generated in production
grep -r "productionBrowserSourceMaps\|devtool.*source-map" next.config.* 2>/dev/null
# Check if source maps are accessible on your live site
curl -s -o /dev/null -w "%{http_code}" https://yourdomain.com/_next/static/chunks/main.js.map
# If 200, your source maps are publicly accessible — disable immediately
In next.config.js, ensure:
module.exports = {
productionBrowserSourceMaps: false, // This MUST be false in production
}
7. Auditing OAuth Apps and Third-Party Integrations
The Vercel breach is fundamentally an OAuth supply chain attack. One employee granting Allow All permissions to a third-party AI tool was the entire initial access vector.
7.1 Google Workspace OAuth Audit
# Using Google Workspace Admin SDK (requires admin access)
# List all OAuth tokens issued in your domain
# Via Admin Console:
# 1. admin.google.com → Security → API Controls → App Access Control
# 2. Review every app listed — especially those with:
# - "Allow All" or broad scopes
# - Access to Gmail, Drive, Calendar simultaneously
# - Apps you don't recognize
# - AI productivity tools (these are the new attack surface)
Key scopes to flag as high-risk:
https://www.googleapis.com/auth/gmail.modify — can read/send email
https://www.googleapis.com/auth/drive — full Drive access
https://www.googleapis.com/auth/admin.directory.user — can manage users
https://mail.google.com/ — full Gmail access (the nuclear scope)
Any scope from an AI agent/assistant tool
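Scope triage is easy to automate once you can export grants. A Python sketch that scores a grant's scope list against the high-risk set above; the thresholds are illustrative, not a standard:

```python
# Sketch: triage an OAuth grant by scope breadth. HIGH_RISK mirrors the scopes
# listed above; the scoring thresholds are illustrative heuristics.
HIGH_RISK = {
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/gmail.modify",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def grant_risk(scopes):
    """Return ('critical'|'high'|'review', matched high-risk scopes)."""
    hits = sorted(s for s in scopes if s in HIGH_RISK)
    # Full Gmail access, or two broad scopes in one grant, is critical.
    if "https://mail.google.com/" in hits or len(hits) >= 2:
        return "critical", hits
    if hits:
        return "high", hits
    return "review", hits

level, hits = grant_risk([
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/gmail.modify",
])
print(level, hits)  # an AI tool holding Drive + Gmail together is critical
```

Feed it the scope lists from your Workspace OAuth log export and review everything at "high" or above.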
7.2 Vercel Integration Audit
# List your Vercel integrations via CLI
vercel integrations list
# Check the Vercel dashboard:
# Settings → Integrations → Review every connected service
# Pay attention to:
# - GitHub integration (what repos does it have write access to?)
# - Linear integration (reported as disproportionately impacted)
# - Any custom integrations
# Review Vercel activity logs for the exposure window
vercel logs --since 2026-04-01
# Or via the dashboard: vercel.com/[team]/~/activity
# Filter for:
# - env.read, env.list API calls
# - Deployment creations from unexpected sources
# - Integration permission changes
# - New team member additions
7.3 GitHub Integration Audit
# Check which apps have access to your GitHub org
# Go to: github.com/organizations/YOUR_ORG/settings/installations
# Via API:
curl -H "Authorization: Bearer $GITHUB_TOKEN" \
https://api.github.com/orgs/YOUR_ORG/installations
# Check your org audit log for the exposure window
# github.com/organizations/YOUR_ORG/settings/audit-log
# Filter for:
# - integration_installation events
# - repo.access events
# - workflow_dispatch (manual action triggers)
# - release and tag creation events you didn't make
# Check GitHub Actions history for unexpected runs
gh run list --repo YOUR_ORG/YOUR_REPO --limit 50 --json event,conclusion,createdAt
# Check for unexpected branches
git branch -r --list 'origin/*' | while read branch; do
echo "$branch: $(git log -1 --format='%ai %an' "$branch")"
done
7.4 NPM Supply Chain Verification
If you publish npm packages and your publish workflow touches Vercel infrastructure at any point:
# Check npm publish history for unexpected versions
npm view YOUR_PACKAGE time --json
# Compare the published tarball against your git tag
npm pack YOUR_PACKAGE@latest
tar -xzf YOUR_PACKAGE-*.tgz
diff -r package/ /path/to/your/git/repo/dist/
# Check npm access tokens
npm token list
# Revoke any tokens you can't account for
npm token revoke $TOKEN_ID
# Enable 2FA on publish if not already
npm profile enable-2fa auth-and-writes
# Check if your package has been flagged by Socket.dev
npx socket analyze YOUR_PACKAGE
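The publish-history check can be automated. A Python sketch that parses `npm view <pkg> time --json` output and flags versions you never released; `known_releases` is a mapping you would maintain yourself, and the dates below are examples:

```python
import json
from datetime import datetime, timedelta, timezone

# Sketch: flag npm versions whose publish time does not match a known release.
# `known_releases` maps version -> expected publish datetime (examples below).
def unexpected_publishes(npm_time_json: str, known_releases: dict, tolerance_hours=24):
    times = json.loads(npm_time_json)
    flagged = []
    for version, stamp in times.items():
        if version in ("created", "modified"):  # metadata keys, not versions
            continue
        published = datetime.fromisoformat(stamp.replace("Z", "+00:00"))
        expected = known_releases.get(version)
        if expected is None or abs(published - expected) > timedelta(hours=tolerance_hours):
            flagged.append((version, stamp))
    return flagged

raw = ('{"created":"2026-01-01T00:00:00.000Z",'
       '"1.0.0":"2026-01-01T00:05:00.000Z",'
       '"1.0.1":"2026-04-18T03:12:00.000Z"}')
known = {"1.0.0": datetime(2026, 1, 1, tzinfo=timezone.utc)}
print(unexpected_publishes(raw, known))  # 1.0.1 was never released by us
```

An unexpected version published during the exposure window is a strong indicator of a token-based supply chain injection.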
8. Third-Party Risk: Why Context.ai Should Have Been on Your Radar
The entire Vercel breach traces back to a single vendor: Context.ai, a small AI productivity tool that most security teams had never heard of, let alone assessed. This is the fundamental gap that Third-Party Risk Management (TPRM) is supposed to close, and where most TPRM programs fail.
Context.ai was not a vendor that went through procurement. It was a Chrome extension that an employee installed on their own. It requested, and was granted, `Allow All` OAuth permissions to a corporate Google Workspace. No security review. No scope restriction. No ongoing monitoring.
What a TPRM Platform Should Have Caught
This is the kind of incident a mature TPRM program is supposed to catch earlier. Third-party risk today is not limited to annual reviews or static questionnaires. It requires continuous discovery of shadow SaaS, real-time monitoring of vendor posture, visibility into OAuth permissions, and contextual escalation when external signals suggest a vendor may be drifting into active risk.
Shadow SaaS discovery
The first step is knowing that an unknown third-party tool, like Context.ai, exists in your environment at all. RiskProfiler's TPRM module discovers third-party services connected to your organization by correlating DNS records, OAuth grant logs, SSO authentication events, and browser extension telemetry. When an employee authorizes a new AI tool against your Google Workspace, that event should surface in your TPRM dashboard as an unvetted vendor, not buried in a Workspace admin log that nobody checks.
Vendor security posture scoring
RiskProfiler continuously assesses the external security posture of your third-party vendors. For Context.ai, that assessment would have flagged: a small team with limited security maturity, no SOC 2 or ISO 27001 certification, broad OAuth scope requirements (`Allow All`), a Chrome extension with access to sensitive Workspace data, and an AWS infrastructure footprint that, as we now know, was itself breached a month before Vercel was compromised. These signals compound into a risk score that should have triggered review before the tool was ever authorized.
Continuous vendor monitoring
Modern third-party security posture cannot be managed with point-in-time questionnaires. Context.ai's Chrome extension was removed from the Chrome Web Store on 27 March, three weeks before Vercel disclosed the breach. RiskProfiler's continuous monitoring would have flagged that removal as a risk signal (vendors pulling their own extensions from public marketplaces is a strong indicator of a security event) and escalated it to the vendor management team.
OAuth scope analysis
RiskProfiler maps the OAuth scopes each third-party vendor holds across your organization. An AI productivity tool holding `mail.google.com` (full Gmail access), `drive` (full Drive access), and broad Workspace admin scopes in a single grant is a textbook over-permission pattern. Our platform flags these as critical findings and recommends scope reduction or vendor removal.
The lesson is not "don't use AI tools." The lesson is: every OAuth grant to a third-party SaaS is a trust decision with supply chain consequences. If your TPRM program doesn't discover, assess, and continuously monitor these grants, you are flying blind — and the next Context.ai is already installed in your environment.
9. Open Source Tools Reference
| Tool | Purpose | Install | Key Command |
| --- | --- | --- | --- |
| TruffleHog | Multi-source secret scanning with live verification | brew install trufflehog | trufflehog git file://. --results=verified |
| Gitleaks | Fast git-history secret scanning | brew install gitleaks | gitleaks detect --source . -v |
| ggshield | GitGuardian CLI (550+ secret types) | pip install ggshield | ggshield secret scan repo . |
| detect-secrets | Yelp's baseline-oriented scanner | pip install detect-secrets | detect-secrets scan > .secrets.baseline |
| git-secret-scanner | Combined TruffleHog + Gitleaks org scanner | GitHub: padok-team/git-secret-scanner | see repository README |
| github-dork.py | Automated GitHub dork search | GitHub: techgaun/github-dorks | see repository README |
| Nuclei | Template-based vulnerability scanner | brew install nuclei | nuclei -u https://yourdomain.com |
| Socket CLI | npm/PyPI supply chain analysis | runs via npx | npx socket analyze YOUR_PACKAGE |
| Nudge Security | SaaS/OAuth app discovery | SaaS (no install) | Discover shadow SaaS and OAuth grants |
| Semgrep | Static analysis with secret detection rules | pip install semgrep | semgrep --config p/secrets . |
10. Cloud Provider Log Queries
If cloud or payment credentials were exposed, review logs immediately to check for misuse, persistence, or data access during the exposure window.
10.1 AWS CloudTrail
If AWS credentials were stored in Vercel env vars, check CloudTrail for unauthorized usage. The exposure window is conservatively from 1 April 2026 to the present day.
# Search for API calls from unexpected IPs or user agents
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIA_YOUR_KEY \
--start-time "2026-04-01T00:00:00Z" \
--end-time "2026-04-21T23:59:59Z" \
--output json
# Check for IAM enumeration (common first move after key theft)
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=GetCallerIdentity \
--start-time "2026-04-01T00:00:00Z"
# Look for S3 data exfiltration
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=GetObject \
--start-time "2026-04-01T00:00:00Z" \
--max-results 100
# Check for key creation (persistence mechanism)
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=EventName,AttributeValue=CreateAccessKey \
--start-time "2026-04-01T00:00:00Z"
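To triage CloudTrail output at scale, filter events by source IP against your known egress ranges. A Python sketch over `lookup-events` JSON output; the CIDR and sample events below are illustrative:

```python
import ipaddress
import json

# Sketch: surface CloudTrail events originating outside known egress CIDRs.
# KNOWN_CIDRS values are examples; use your real office/VPN/NAT ranges.
KNOWN_CIDRS = [ipaddress.ip_network("203.0.113.0/24")]

def foreign_events(lookup_events_json: str):
    """Return (eventName, sourceIP) for events from unknown networks."""
    events = json.loads(lookup_events_json).get("Events", [])
    flagged = []
    for e in events:
        detail = json.loads(e["CloudTrailEvent"])  # nested JSON string
        ip_str = detail.get("sourceIPAddress", "")
        try:
            ip = ipaddress.ip_address(ip_str)
        except ValueError:
            continue  # AWS service principals appear as hostnames, skip them
        if not any(ip in net for net in KNOWN_CIDRS):
            flagged.append((detail.get("eventName"), ip_str))
    return flagged

raw = json.dumps({"Events": [
    {"CloudTrailEvent": json.dumps({"eventName": "GetCallerIdentity", "sourceIPAddress": "198.51.100.7"})},
    {"CloudTrailEvent": json.dumps({"eventName": "GetObject", "sourceIPAddress": "203.0.113.10"})},
]})
print(foreign_events(raw))
```

Pipe the output of the `aws cloudtrail lookup-events` commands above into this filter and review every hit.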
10.2 GCP Audit Logs
# Search for unusual API activity from service accounts
gcloud logging read \
'protoPayload.authenticationInfo.principalEmail="YOUR_SERVICE_ACCOUNT" AND
timestamp>="2026-04-01T00:00:00Z"' \
--project=YOUR_PROJECT \
--format=json \
--limit=500
# Check for data access from unusual IPs
gcloud logging read \
'protoPayload.requestMetadata.callerIp!="YOUR_KNOWN_IP" AND
timestamp>="2026-04-01T00:00:00Z"' \
--project=YOUR_PROJECT
10.3 Stripe Dashboard
If Stripe keys were in non-sensitive Vercel env vars:
Go to dashboard.stripe.com → Developers → API Keys
Roll your secret key immediately
Check Developers → Events for any suspicious API calls during the exposure window
Look for: charge creation, customer export, balance transfers, account updates
Check Developers → Webhooks for newly created webhook endpoints (persistence mechanism)
11. Detection Queries for SIEM/Log Platforms
These detection queries help you quickly spot suspicious activity linked to potentially exposed credentials across major SIEM and log platforms.
Splunk
# Detect Vercel-related credential usage from unusual sources
index=cloudtrail sourcetype=aws:cloudtrail
userIdentity.accessKeyId IN ("AKIA_YOUR_EXPOSED_KEY_1", "AKIA_YOUR_EXPOSED_KEY_2")
| stats count by sourceIPAddress, eventName, userAgent
| where sourceIPAddress!="YOUR_KNOWN_CIDR"
# Detect Google Workspace OAuth grants for the compromised app
index=gworkspace sourcetype=google:workspace:activity
events.name="authorize"
events.parameters.client_id="110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
| table _time, actor.email, events.parameters.app_name, events.parameters.scope
Datadog
# Search for API calls using potentially compromised credentials
@evt.name:AwsApiCall @usr.access_key_id:AKIA_YOUR_KEY
-@network.client.ip:YOUR_KNOWN_IP
| stats count by @network.client.ip, @evt.name
Elastic/OpenSearch
{
"query": {
"bool": {
"must": [
{"match": {"userIdentity.accessKeyId": "AKIA_YOUR_KEY"}},
{"range": {"eventTime": {"gte": "2026-04-01T00:00:00Z"}}}
],
"must_not": [
{"match": {"sourceIPAddress": "YOUR_KNOWN_IP"}}
]
}
}
}
Dark Web and Credential Leak Monitoring
The SIEM queries above detect usage of compromised credentials against your own infrastructure. But you also need to know when your credentials appear on dark web marketplaces, paste sites, stealer log dumps, and Telegram channels.
The Vercel breach data appeared on BreachForums hours before Vercel published its security bulletin. Trend Micro's analysis identified a nine-day gap between the earliest evidence of credential exposure (an OpenAI leaked-credential notification on 10 April) and Vercel's public disclosure on 19 April. During that window, credentials circulated in hidden channels with no customer notification.
This is exactly the detection gap that RiskProfiler's Digital Risk Protection (DRP) module is designed to close:
Stealer log monitoring
RiskProfiler’s agentic AI-powered dark web threat intelligence module, KnyX Dark Web AI, ingests and correlates stealer log datasets (Lumma, RedLine, Vidar, Raccoon, and others) against your organization's email domains, IP ranges, and asset inventory. The Vercel breach originated from a Lumma Stealer infection on a Context.ai employee in February 2026. With proactive third-party monitoring of this kind, similar stealer log hits can be flagged well in advance, helping organizations identify cascading breaches before they escalate.
BreachForums and dark web marketplace monitoring
When the threat actor posted Vercel data on BreachForums with a $2M asking price, RiskProfiler's dark web crawlers identified and parsed the listing, extracted the claimed data categories (GitHub tokens, npm tokens, API keys, source code), and correlated them against customer asset inventories. Organizations with active dark web monitoring received alerts within hours of the listing, not days, giving them time to rotate exposed credentials before the listed data could be weaponized.
Credential leak correlation
RiskProfiler cross-references leaked credentials against your known asset inventory with its attack path mapping feature. If a database connection string for `your-company-db.us-east-1.rds.amazonaws.com` appears in a leak, and that hostname is in your external attack surface inventory, the alert carries full context: the leaked credential, the asset it unlocks, and the blast radius if exploited.
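The correlation step is conceptually simple: extract hostnames from leaked material and intersect them with your asset inventory. A minimal sketch of that logic (the regex and sample data are illustrative, not production-grade):

```python
import re

# Rough hostname matcher: dot-separated labels of letters, digits, hyphens
HOSTNAME = re.compile(r"\b([a-z0-9-]+(?:\.[a-z0-9-]+)+)\b", re.I)

def blast_radius(leaked_text, asset_inventory):
    """Extract hostnames from a leaked blob and intersect them with the
    assets you actually own -- a minimal credential/asset correlation."""
    found = {h.lower() for h in HOSTNAME.findall(leaked_text)}
    return sorted(found & asset_inventory)

leak = "postgres://admin:hunter2@your-company-db.us-east-1.rds.amazonaws.com:5432/app"
assets = {"your-company-db.us-east-1.rds.amazonaws.com", "app.your-company.com"}
print(blast_radius(leak, assets))
# ['your-company-db.us-east-1.rds.amazonaws.com']
```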
Automated provider notifications
As Trend Micro noted, services like OpenAI, AWS, GitHub, and Stripe operate their own leaked-credential detection systems. RiskProfiler aggregates these notifications alongside its own detection to ensure that no signal is missed when a credential surfaces in the wild.
12. Hardening Checklist: Preventing the Next One
Once containment is complete, the focus shifts to reducing the chance of the same exposure happening again. This hardening checklist covers the key controls across Vercel, Google Workspace, Next.js, and your supply chain to help close common gaps and strengthen long-term resilience.
Vercel-Specific
☑︎ Mark all environment variables containing secrets as "sensitive" in Vercel (now defaults to on for new vars)
☑︎ Audit and remove any unused environment variables
☑︎ Enable Deployment Protection at Standard or above
☑︎ Rotate all Deploy Hook URLs
☑︎ Review and minimize GitHub integration repository scope
☑︎ Remove any Linear or other third-party integrations you aren't actively using
☑︎ Enable 2FA on your Vercel account with an authenticator app or passkey
☑︎ Review team membership — remove anyone who shouldn't have access
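The first two Vercel items above can be scripted. Assuming you have pulled your project's environment-variable list (for example via the Vercel REST API; the `key`/`type` field names here are an assumption to verify against the current API docs), a sketch that flags secret-looking variables not marked sensitive:

```python
import re

# Heuristic only: names that usually indicate a secret
SECRET_HINT = re.compile(r"(KEY|SECRET|TOKEN|PASSWORD|PASSWD|PRIVATE|CREDENTIAL)", re.I)

def audit_env_vars(env_vars):
    """Flag variables whose names look secret but are not marked sensitive.
    `env_vars` mirrors the shape of Vercel's project-env API response
    (field names are an assumption, not a documented contract)."""
    return [v["key"] for v in env_vars
            if SECRET_HINT.search(v["key"]) and v.get("type") != "sensitive"]

demo = [
    {"key": "DATABASE_PASSWORD", "type": "encrypted"},
    {"key": "STRIPE_SECRET_KEY", "type": "sensitive"},
    {"key": "NEXT_PUBLIC_SITE_NAME", "type": "plain"},
]
print(audit_env_vars(demo))  # ['DATABASE_PASSWORD']
```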
Google Workspace
☑︎ Audit all OAuth app grants across your organization
☑︎ Implement an OAuth app allowlist — block unapproved apps from accessing Workspace data
☑︎ Enforce least-privilege scopes for approved apps
☑︎ Set up alerting on new OAuth grants, especially those with broad scopes
☑︎ Review and restrict the ability of users to grant OAuth access to enterprise accounts
☑︎ Enable Google Workspace DLP rules to flag credential-like patterns in email and Drive
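The OAuth-grant audit above can be automated against exported token data. Assuming records shaped like the Admin SDK Directory API `tokens` resource (`clientId`, `displayText`, `scopes`), a sketch that flags grants that are unapproved or carry broad scopes (the allowlist, client IDs, and scope set are illustrative):

```python
# Scopes broad enough to warrant review regardless of allowlist status
BROAD_SCOPES = {
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def flag_risky_grants(tokens, allowlist):
    """Flag OAuth grants that are not allowlisted or that request broad
    scopes. Field names follow the Admin SDK `tokens` resource."""
    risky = []
    for t in tokens:
        broad = set(t.get("scopes", [])) & BROAD_SCOPES
        if t["clientId"] not in allowlist or broad:
            risky.append((t.get("displayText", t["clientId"]), sorted(broad)))
    return risky

grants = [
    {"clientId": "unknown-client-id", "displayText": "Context.ai",
     "scopes": ["https://mail.google.com/",
                "https://www.googleapis.com/auth/drive"]},
    {"clientId": "approved-crm", "displayText": "CRM",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]
risky = flag_risky_grants(grants, allowlist={"approved-crm"})
print(risky)
```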
Next.js Application
☑︎ Audit every `NEXT_PUBLIC_` variable — ensure no secrets are prefixed this way
☑︎ Use the `server-only` package to enforce server/client boundaries
☑︎ Disable `productionBrowserSourceMaps` in production
☑︎ Add Gitleaks as a pre-commit hook to prevent secrets from entering git history
☑︎ Add TruffleHog to your CI pipeline with `--results=verified --fail`
☑︎ Implement the `NEXTJS_SAFE_NEXT_PUBLIC_ENV_USAGE` conformance rule
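The `NEXT_PUBLIC_` audit is easy to script. A minimal sketch that scans the text of a `.env` file for public variables whose names suggest secrets (the name heuristic is illustrative, not exhaustive):

```python
import re

# Heuristic: variable names that usually indicate a secret
SECRET_HINT = re.compile(r"(KEY|SECRET|TOKEN|PASSWORD|PRIVATE|CREDENTIAL)", re.I)

def audit_public_vars(env_text):
    """Return NEXT_PUBLIC_ variables whose names suggest they hold secrets.
    Anything prefixed NEXT_PUBLIC_ is inlined into the client bundle."""
    flagged = []
    for line in env_text.splitlines():
        line = line.strip()
        if not line.startswith("NEXT_PUBLIC_") or "=" not in line:
            continue
        name = line.split("=", 1)[0]
        if SECRET_HINT.search(name):
            flagged.append(name)
    return flagged

sample = """\
NEXT_PUBLIC_SITE_URL=https://example.com
NEXT_PUBLIC_STRIPE_SECRET_KEY=sk_live_xxx
DATABASE_URL=postgres://localhost/app
"""
print(audit_public_vars(sample))  # ['NEXT_PUBLIC_STRIPE_SECRET_KEY']
```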
Supply Chain
☑︎ Enable npm 2FA for auth-and-writes
☑︎ Pin GitHub Actions to commit SHAs, not tags
☑︎ Audit `npm view <your-package> time --json` for unexpected publish events
☑︎ Compare published tarballs to your git tags
☑︎ Use Socket.dev or Snyk to monitor your dependency tree for compromised packages
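The publish-event audit above reduces to diffing `npm view <pkg> time --json` output against the releases your team actually cut. A sketch of that comparison (the package versions and timestamps are illustrative):

```python
def unexpected_publishes(time_json, known_releases):
    """Given the dict from `npm view <pkg> time --json`, return versions
    published that your team does not recognize as legitimate releases."""
    unexpected = []
    for version, ts in time_json.items():
        # "created" and "modified" are registry metadata, not versions
        if version in ("created", "modified"):
            continue
        if version not in known_releases:
            unexpected.append((version, ts))
    return unexpected

times = {
    "created": "2025-01-10T12:00:00.000Z",
    "modified": "2026-04-18T03:21:00.000Z",
    "1.0.0": "2025-01-10T12:00:00.000Z",
    "1.0.1": "2026-04-18T03:21:00.000Z",  # a 3 AM publish nobody remembers
}
print(unexpected_publishes(times, known_releases={"1.0.0"}))
# [('1.0.1', '2026-04-18T03:21:00.000Z')]
```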
External Attack Surface and Vendor Risk (Continuous)
☑︎ Deploy an EASM platform to continuously discover and fingerprint all Vercel-hosted assets across your organization
☑︎ Monitor for exposed `.env` files, source maps, and `__NEXT_DATA__` payloads at the edge
☑︎ Enable dark web and credential leak monitoring against your domains and known asset hostnames
☑︎ Detect dangling DNS records (CNAMEs to `cname.vercel-dns.com` for deleted projects)
☑︎ Inventory all third-party SaaS with OAuth access to your Google Workspace / Azure AD / Okta
☑︎ Score and continuously monitor the security posture of every vendor with OAuth grants
☑︎ Set up alerts for vendor signals: Chrome extension removals, domain changes, breach disclosures
☑︎ Monitor stealer log feeds for credentials associated with your organization and your critical vendors
☑︎ Correlate leaked credentials with your asset inventory to assess blast radius automatically
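The dangling-DNS check above reduces to comparing resolved CNAME targets against your list of live projects. Assuming you have already resolved CNAMEs for your hostnames (for example with `dig +short CNAME`), a sketch of the comparison:

```python
def find_dangling(cname_records, active_projects):
    """Flag hosts whose CNAME still points at Vercel's edge but whose
    project no longer exists -- subdomain-takeover candidates.
    `cname_records` maps hostname -> resolved CNAME target."""
    return sorted(
        host for host, target in cname_records.items()
        if target.rstrip(".") == "cname.vercel-dns.com"
        and host not in active_projects
    )

records = {
    "app.example.com": "cname.vercel-dns.com.",
    "old-landing.example.com": "cname.vercel-dns.com.",  # project deleted
    "mail.example.com": "mailhost.example.net.",
}
active = {"app.example.com"}
print(find_dangling(records, active))  # ['old-landing.example.com']
```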
13. The Bigger Picture: OAuth Is the New Lateral Movement
This breach is not an isolated incident. It fits a pattern that crystallized in 2026: attackers consistently targeting developer-held credentials across CI/CD platforms, package registries, OAuth integrations, and deployment platforms. The LiteLLM and Axios supply chain attacks earlier this year followed the same playbook.
The core lesson is architectural. OAuth tokens granted to third-party SaaS, especially AI productivity tools, are high-value credentials that need to be managed with the same rigor as SSH keys or cloud IAM roles. An employee installing a Chrome extension and clicking "Allow All" on their corporate Google account is now a top-tier initial access vector.
Until the industry treats OAuth tokens as the high-value credentials they are, we will keep reading the same breach report with different vendor names swapped in.
Where RiskProfiler Fits in This New Reality
The Vercel breach crossed three domains that are traditionally handled separately in most security programs. RiskProfiler's agentic AI-powered external threat intelligence platform unifies scanning, detection, and prioritization across these otherwise siloed domains, providing contextual, comprehensive visibility and a controlled path to remediation.
External Attack Surface Management (EASM)
RiskProfiler’s external attack surface management capability gives you a continuously updated view of your internet-facing assets, enabling fast and effective incident containment and remediation. In the case of this breach, RiskProfiler helps you keep track of all Vercel-hosted applications, preview and production deployments, exposed APIs, certificates, subdomains, cloud-connected services, and ownership relationships. That visibility is what lets teams quickly identify which environments are exposed, which deployments are still reachable, and where attacker-accessible blind spots may remain. KnyX Recon AI, the agentic AI-powered EASM module, continuously monitors these external assets and maps them contextually, reducing the inventory gaps that slow triage and containment.
Digital Risk Protection (DRP)
The real danger is not just the initial exposure, but what happens in the gap before formal disclosure. RiskProfiler’s Brand Risk Protection, Identity Intelligence, and Dark Web Monitoring capabilities are built to detect leaked credentials, access codes, malicious brand references, look-alike domains, phishing kits, exposed tokens, code, and sensitive records across open, deep, and dark web sources. That means security teams can identify when leaked Vercel-linked secrets, source fragments, or impersonation infrastructure begin circulating externally, then move faster on secret rotation, takedowns, and customer-protection workflows before attackers scale abuse.
Third-Party Risk Management (TPRM)
This incident also shows why third-party exposure can no longer be treated as a separate workflow from breach response. RiskProfiler’s Third-Party Risk Management capability continuously monitors supply chain connections and extended vendor relationships, combining adaptive vendor risk questionnaires, threat scoring, and ongoing exposure scanning to surface suppliers, SaaS tools, and integrations that increase blast radius. In a case like the Vercel breach, that helps teams assess which vendors had excessive access, which shadow tools introduced avoidable risk, and which third-party relationships now require reassessment, restriction, or remediation.
RiskProfiler unifies all three on a single platform, powered by our agentic AI engine KnyX, which correlates signals across your attack surface, vendor ecosystem, and dark web exposure to surface the alerts that matter, filtering out the noise that doesn't.
If the Vercel breach has taught you anything, it should be this: the blast radius of a modern supply chain attack spans your deployment platform, your vendor ecosystem, and the dark web — simultaneously. Your detection and response capability needs to span all three as well.
Request a demo today or explore the platform to understand how RiskProfiler unifies your threat management workflow.
This document will be updated as Vercel releases additional IOCs and as the Mandiant investigation progresses. Last updated: 21 April 2026.