TL;DR — 28.65 million secrets leaked on GitHub in 2025, and AI-assisted commits leak at double the baseline rate. This 6-layer checklist covers npm config, automation hooks, OAuth hygiene, secrets management, agent permissions, and incident response. Total setup: under 30 minutes. Jump to Layer 1 →

📊 What the 6 layers cover:

  • Layer 1: .npmrc config blocks 80% of supply chain attacks (30 seconds)
  • Layer 2: PreToolUse hooks intercept unsafe npm install commands (5 minutes)
  • Layer 3: OAuth audit revokes over-permissioned AI tool connections (5 minutes)
  • Layer 4: .claudeignore + pre-commit scanning stops secret leaks (10 minutes)
  • Layer 5: Agent deny rules block destructive commands (5 minutes)
  • Layer 6: Incident response runbook for when a token leaks (5 minutes to read)

Here’s a typical AI-assisted workflow with the attack surface marked:

You: "Add HTTP client and deploy"
Agent: npm install axios ← supply chain (Layer 1-2)
Agent: reads .env for config ← secret leak (Layer 4)
Agent: git push origin main ← destructive command (Layer 5)
You: granted OAuth to 6 tools ← token sprawl (Layer 3)
You: token leaks to GitHub ← incident response (Layer 6)

In 2025, 28.65 million hardcoded secrets were added to public GitHub repositories, a 34% year-over-year increase (GitGuardian, 2026). The same report found that AI-assisted commits leaked secrets at 3.2%, more than double the 1.5% baseline. Your existing security practices have blind spots for AI-specific vectors. This checklist closes them.


Why Does AI Coding Need Its Own Security Checklist?

AI coding tools introduce attack surface that traditional security checklists don’t cover. Commits co-authored by AI tools leaked secrets at 3.2% compared to a 1.5% baseline across all public GitHub in 2025 (GitGuardian, 2026). That’s not a tooling bug. It’s a category of risk that didn’t exist two years ago.

The gap between “shipping faster” and “shipping safely” is widening. Pull requests per author increased 20% year-over-year, but incidents per pull request rose 23.5% and change failure rates climbed roughly 30% (Kusari, 2026). More code, more bugs, more exposure.

| Vector | Traditional dev | AI-assisted dev |
| --- | --- | --- |
| Package install | Human reviews deps | Agent installs at machine speed |
| Secret exposure | Developer manages .env | Agent reads files it shouldn't |
| OAuth tokens | Few manual grants | Dozens of AI tool connections |
| Destructive commands | Developer types carefully | Agent runs rm, git push --force |
| Incident response | Same | Same, but incidents happen faster |

Key insight: GitGuardian’s 2026 report found that AI-assisted commits on public GitHub leaked secrets at 3.2%, more than double the 1.5% baseline rate (GitGuardian State of Secrets Sprawl 2026). The risk isn’t that AI tools are malicious. The risk is that they move faster than your security controls.

Traditional checklists cover code review, dependency audits, and secret scanning. They don’t cover OAuth token sprawl across AI tools, agent permission boundaries, or the fact that your AI agent can run rm -rf if you haven’t configured deny rules. This checklist does.


Layer 1 — How Do You Lock Down npm in 30 Seconds?

Four lines in ~/.npmrc block the most common npm supply chain attack vector. In 2025, over 454,600 new malicious packages were identified across registries, a 75% year-over-year increase (Sonatype, 2026). The dominant payload mechanism is postinstall lifecycle scripts. These four lines disable them.

~/.npmrc
ignore-scripts=true
save-exact=true
audit-level=moderate
fund=false

What each line does:

  • ignore-scripts=true disables postinstall scripts globally. The axios RAT, Shai-Hulud worm, and event-stream Bitcoin stealer all needed this to execute.
  • save-exact=true pins exact versions. No more ^1.14.0 resolving to a hijacked 1.14.1.
  • audit-level=moderate fails npm install on known CVEs instead of burying them in warnings.
  • fund=false removes funding noise so security warnings are visible.
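If you'd rather script the change than edit the file by hand, an idempotent append works. This sketch targets a temp file by default so you can dry-run it; set NPMRC to your real ~/.npmrc path to apply it for real:

```shell
# Idempotent setup: append each hardening line only if it's missing.
# Targets a temp file by default so you can inspect the result first;
# set NPMRC="$HOME/.npmrc" to apply to your real config.
NPMRC="${NPMRC:-$(mktemp)}"
for line in 'ignore-scripts=true' 'save-exact=true' 'audit-level=moderate' 'fund=false'; do
  grep -qxF "$line" "$NPMRC" 2>/dev/null || echo "$line" >> "$NPMRC"
done
cat "$NPMRC"
```

Running it twice is safe: grep -qxF matches whole lines exactly, so nothing is duplicated.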

Verify it works:

Terminal window
npm config get ignore-scripts save-exact audit-level fund
# Expected: true true moderate false

Key insight: Over 454,600 new malicious packages were identified in 2025, bringing the cumulative total to over 1.23 million across npm, PyPI, and other registries (Sonatype State of the Software Supply Chain 2026). Disabling lifecycle scripts in ~/.npmrc neutralizes the dominant payload mechanism for all of them.

Deep dive: The 30-Second npm Defense Every Vibe Coder Needs covers edge cases, native packages that need npm rebuild, and the @lavamoat/allow-scripts allow-list for teams.


Layer 2 — How Do You Automate npm Audits with Hooks?

A Claude Code PreToolUse hook intercepts every npm install before the command executes and blocks it if --ignore-scripts is missing. This catches the case where your agent (or a dependency) overrides your .npmrc defaults. 19.7% of packages proposed by AI code assistants don’t even exist, enabling “slopsquatting” attacks (OpenSSF, 2026).

Minimal hook config in .claude/settings.json:

{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/npm-audit-check.sh",
            "timeout": 30
          }
        ]
      }
    ]
  }
}

The hook script checks three things: (1) is --ignore-scripts present, (2) does the package have known CVEs via npm audit, and (3) does it meet a minimum weekly download threshold. If any check fails, the install is blocked before npm resolves dependencies.
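The first of those three checks can be sketched as a small bash function. The CVE and download-threshold checks are left out here, and the stdin field name ("command") is an assumption to confirm against the Claude Code hooks docs, not a verified schema:

```shell
#!/bin/bash
# Sketch of the core check in .claude/hooks/npm-audit-check.sh.
# Claude Code pipes the pending tool call to the hook as JSON on stdin;
# a blocking exit code (2) rejects the command before it runs.

blocks_install() {
  # Returns 0 ("block this") for npm install commands missing --ignore-scripts.
  local cmd="$1"
  case "$cmd" in
    *"npm install"*|*"npm i "*)
      printf '%s' "$cmd" | grep -q -- '--ignore-scripts' || return 0
      ;;
  esac
  return 1   # anything else passes through
}

# Wire-up inside the real hook (the "command" field name is an assumption):
#   cmd=$(sed -n 's/.*"command" *: *"\([^"]*\)".*/\1/p')
#   if blocks_install "$cmd"; then
#     echo "Blocked: re-run npm install with --ignore-scripts" >&2
#     exit 2
#   fi

blocks_install "npm install axios" && echo "blocked"
blocks_install "npm install --ignore-scripts axios" || echo "allowed"
```

Exiting 2 (rather than 1) is what signals a deliberate block in Claude Code hooks; stderr is fed back to the agent as the reason.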

Add CLAUDE.md rules as a fallback for when hooks aren’t available:

## npm Security Rules
- ALWAYS use --ignore-scripts with npm install
- ALWAYS use --save-exact to pin versions
- NEVER install packages with < 1,000 weekly downloads without asking
- NEVER install packages first published within 30 days without asking

Key insight: 19.7% of packages proposed by AI code assistants don’t exist on any registry, creating “slopsquatting” opportunities where attackers register those hallucinated names as malicious packages (OpenSSF Best Practices Working Group, 2026). Process-level hooks catch these before installation.

Deep dive: Stop npm Supply Chain Attacks with Claude Code Hooks covers the full three-layer setup: PreToolUse audit, PostToolUse lockfile diff, and CLAUDE.md version pinning.

Get weekly Claude Code security tips — One email per week. Hooks, CLAUDE.md patterns, and real attack breakdowns. Subscribe to AI Developer Weekly →


Layer 3 — What OAuth Permissions Have You Granted to AI Tools?

Every AI tool connected to your Google, GitHub, or Slack holds OAuth tokens that bypass MFA entirely. In 2025, 1.27 million AI-service secrets were exposed on GitHub, an 81% year-over-year surge (GitGuardian, 2026). If one of those vendors gets breached, your tokens are the attack vector.

This already happened. In the Vercel breach, an attacker stole Context.ai’s OAuth tokens via infostealer malware, then pivoted into a Vercel employee’s Google Workspace, then into Vercel’s internal systems. The initial vector was a single AI tool’s OAuth grant.

Run this 5-minute audit right now:

  1. Google: myaccount.google.com/permissions — revoke any AI tool you don’t actively use
  2. GitHub: github.com/settings/applications — check both OAuth Apps and GitHub Apps tabs
  3. Slack: Workspace Settings → Manage Apps — remove unused integrations
  4. npm: npm token list — revoke any token you don’t recognize

For every tool you keep, check the scope. An AI coding assistant should never need full Google Drive access or GitHub admin:org permissions.

Key insight: GitGuardian found 1.27 million AI-service secrets exposed on public GitHub in 2025, an 81% year-over-year surge, plus 24,008 unique secrets in MCP configuration files alone (GitGuardian State of Secrets Sprawl 2026). OAuth scope creep from AI tools is now a top-3 credential exposure vector.

Deep dive: OAuth Supply Chain Attacks: Your AI Tools Are the New Vector covers the full Vercel/Context.ai attack chain and a response playbook for vendor breaches.


Layer 4 — How Do You Stop Secrets from Leaking into Commits?

Use .claudeignore to block AI agents from reading sensitive files, and pre-commit hooks to catch anything that slips through. AI agents don’t know which files contain secrets unless you tell them. 64% of secrets leaked in 2022 were still valid and exploitable as of January 2026 (GitGuardian, 2026).

Create a .claudeignore file in your project root:

.claudeignore
.env
.env.*
*.pem
*.key
credentials.json
google-services.json
local.properties
config/secrets/
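One gap worth checking: .claudeignore stops the agent reading these files, but it does nothing for files git already tracks — a committed .env is exposed regardless. This sketch flags tracked sensitive files; the demo repo at the top is scaffolding so the snippet runs standalone:

```shell
# Flag sensitive files that git already tracks; .claudeignore can't protect those.
# Demo repo scaffolding (so the check runs standalone):
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email you@example.com && git config user.name you
echo "API_KEY=x" > .env && echo "ok" > app.js
git add .env app.js && git commit -qm init    # simulate the mistake

# The actual check: patterns mirror the .claudeignore list above
for pat in '.env' '.env.*' '*.pem' '*.key' 'credentials.json'; do
  git ls-files -- "$pat"
done | sed 's/^/WARNING: tracked sensitive file: /'
```

Any WARNING line means the file ships in commits no matter what the agent can see; remove it with git rm --cached and rotate the secret.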

Add pre-commit secret scanning with gitleaks:

Terminal window
# Install gitleaks
brew install gitleaks
# Add pre-commit hook
cat > .git/hooks/pre-commit << 'HOOK'
#!/bin/bash
gitleaks protect --staged --verbose
HOOK
chmod +x .git/hooks/pre-commit

For stronger isolation, use git worktrees. Clone a clean copy of your repo without any secret files, and point Claude Code at the worktree instead of your main checkout. The worktree has no .env, no credentials, no way to leak what it can’t see.
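A worked version of the worktree idea. Paths and the branch name are examples, and the demo repo is scaffolding so the snippet runs standalone; the key property is that untracked files never carry over into a fresh worktree:

```shell
# Demo: untracked files like .env never appear in a fresh worktree,
# so an agent pointed at the worktree can't read or leak them.
cd "$(mktemp -d)"
git init -q myapp && cd myapp
git config user.email you@example.com && git config user.name you
echo "console.log('hi')" > index.js
git add index.js && git commit -qm init
echo "API_KEY=secret" > .env                 # untracked secret in main checkout

git worktree add -b agent-work ../myapp-agent
ls -A ../myapp-agent                          # index.js and a .git file; no .env
# Point Claude Code at ../myapp-agent instead of this checkout.
```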

Key insight: 64% of valid secrets leaked on public GitHub in 2022 were still active and exploitable as of January 2026 (GitGuardian State of Secrets Sprawl 2026). Once a secret leaks, it stays leaked. Prevention is the only strategy that works at scale.

Deep dive: How I Protect Sensitive Code While Using Claude Code on Real Projects covers four strategies ranked by security vs. daily sustainability.


Layer 5 — What Permissions Should Your AI Agent Actually Have?

Configure explicit deny rules in .claude/settings.json for destructive commands and sensitive file paths. Default Claude Code permissions are too broad. I learned this the hard way when rm Claude.md bypassed three layers of defense because they were all configured to block rm -rf, not rm.

{
  "permissions": {
    "deny": [
      "Bash(rm *)",
      "Bash(git push --force*)",
      "Bash(git reset --hard*)",
      "Bash(chmod 777*)",
      "Read(.env*)",
      "Read(credentials*)",
      "Read(*.pem)",
      "Read(*.key)"
    ]
  }
}

The deny list should cover:

  • Destructive file operations: rm, rmdir, file overwrites
  • Dangerous git commands: push --force, reset --hard, clean -f
  • Secret file reads: .env, credential files, private keys
  • Permission escalation: chmod 777, sudo

Test your deny list after configuring. Run the exact commands you’re blocking and confirm Claude Code refuses them. The number of security layers doesn’t matter if any of them has a gap.
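Alongside the behavioral test, a structural check catches a quieter failure mode: a typo that makes settings.json invalid JSON, in which case the deny rules presumably never load (behavior assumed; verify against your Claude Code version). A sketch that writes a sample file into a temp dir; point the path at your real .claude/settings.json:

```shell
# Validate that the deny list parses as JSON. The sample file below is
# scaffolding; replace it with your real .claude/settings.json path.
cd "$(mktemp -d)" && mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "permissions": {
    "deny": ["Bash(rm *)", "Bash(git push --force*)", "Read(.env*)"]
  }
}
EOF
python3 -c 'import json; cfg = json.load(open(".claude/settings.json")); print(len(cfg["permissions"]["deny"]), "deny rules parsed")'
```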

Key insight: Claude Code’s permission system grants filesystem and shell access by default when users approve commands. Without a configured deny list in settings.json, the agent can read credential files, write to any path, and execute destructive shell commands. Explicit deny rules are the only guaranteed protection.

Deep dive: I Set Up 3 Layers of Defense in Claude Code. It Deleted My File Anyway. covers why testing your actual threat model matters more than stacking layers.


Layer 6 — What’s Your Incident Response Plan When a Token Leaks?

Revoke first, scope blast radius second, rotate downstream credentials third. Automated scanners harvest leaked secrets from public GitHub within 5 minutes. You have one hour before the damage compounds. Here’s the compressed runbook:

Minute 0-5: Revoke

Terminal window
# npm token
npm token revoke <token-id>
# GitHub PAT
# → github.com/settings/tokens → Delete
# AWS access key
aws iam deactivate-access-key --access-key-id <key>

Minute 5-15: Scope blast radius

  • What could the token access? List every service, repo, and secret it could reach.
  • Export audit logs before they rotate out (GitHub, Google Workspace, npm).

Minute 15-30: Rotate downstream

  • If an npm token could read CI secrets, those CI secrets are compromised too.
  • If a GitHub PAT had repo access, every secret in those repos is exposed.
  • Follow the cascade. Rotate everything the leaked token could touch.

Minute 30-60: Harden

  • Enable GitHub secret scanning push protection.
  • Add the pre-commit gitleaks hook from Layer 4.
  • Switch from long-lived tokens to short-lived alternatives (GitHub fine-grained PATs, npm granular tokens).

Key insight: GitGuardian tracked credentials leaked in 2022 and found that 64% remained valid and exploitable as of January 2026. The remediation gap means most leaked secrets are never revoked. A 60-minute response window is the difference between a contained incident and a cascading breach.

Deep dive: Your npm Token Got Leaked. Here’s Your Next 60 Minutes. covers the full runbook with exact commands for npm, GitHub, Vercel, and AWS.


How Do You Know If You’re Actually Protected?

Run this 5-minute self-audit quarterly. One check per layer, each produces a pass/fail signal:

Terminal window
# Layer 1: npm config
npm config get ignore-scripts save-exact audit-level
# Expected: true true moderate
# Layer 2: Hook installed (Claude Code)
grep -c "npm-audit" .claude/settings.json
# Expected: 1 or more
# Layer 4: Pre-commit secret scan
gitleaks protect --staged --verbose
# Expected: no leaks found
# Layer 5: Deny list configured
grep -c "deny" .claude/settings.json
# Expected: 1 or more

For Layer 3 (OAuth), manually revisit the four permissions pages from the Layer 3 audit (Google, GitHub, Slack, npm) each quarter and revoke anything you no longer use.

For Layer 6 (incident response), do a tabletop exercise. Pick a random token from npm token list and walk through the revoke-scope-rotate-harden sequence without actually revoking it. If you can’t complete the exercise in 15 minutes, your runbook needs work.

Try it now: Run npm config get ignore-scripts save-exact audit-level in your terminal. If you don’t see true true moderate, open ~/.npmrc and add the four lines from Layer 1. That’s 30 seconds of work that blocks the most common npm attack vector. Then pick one more layer and do it today.

This checklist is the floor, not the ceiling. Each layer has a deep-dive post with the full setup, edge cases, and the real incidents that motivated it. Start at Layer 1 and work down. Under 30 minutes for all six.

Want the next security post when it ships? One email per week, no fluff. Join AI Developer Weekly →


FAQ

Do I need all 6 layers?

Layer 1 (npmrc) and Layer 4 (secrets) are non-negotiable. The others depend on your threat model. Solo dev on side projects? Layers 1-4. Shipping production code with a team? All 6.

Does this checklist work for Cursor, Copilot, and Windsurf too?

Layers 1, 3, 4, and 6 are tool-agnostic. Layers 2 and 5 are Claude Code-specific (hooks and settings.json), but every AI coding agent has equivalent permission controls. Check your tool’s documentation for the matching config.

How long does the full setup take?

Under 30 minutes if you follow the layers in order. Layer 1 is 30 seconds. Layer 2 is 5 minutes. Layer 3 depends on how many AI tools you’ve connected, but the audit itself takes 5 minutes. Layers 4-6 are 5 minutes each.

What’s the single highest-impact action?

Add ignore-scripts=true to ~/.npmrc. One line, 30 seconds, blocks the dominant npm malware vector. The axios RAT, Shai-Hulud worm, and event-stream all relied on postinstall scripts to execute.

Should I stop using AI coding tools because of these risks?

No. The productivity gains are real. But shipping without these layers means you’re moving fast toward a breach. This checklist is the floor. Build on it.