TL;DR — Claude Code has 4 hook handler types (command, prompt, agent, http) and 21 lifecycle events. Most developers default to command hooks on PreToolUse. This decision guide helps you pick the right type for the right event, and tells you which 3 to implement first. Jump to the decision tree →
📊 What this guide gives you:
- A decision tree for choosing CLAUDE.md vs hook vs both
- Priority-ranked list of which 7 events to implement first
- Handler type comparison table (speed, reliability, codebase access)
- Exit code cheat sheet and hook debugging workflow
Two configs. Same goal: block a force push to main. Different reliability:
```bash
# Command hook (deterministic, <5ms)
COMMAND=$(jq -r '.tool_input.command // empty' < /dev/stdin)
if echo "$COMMAND" | grep -qE 'git push.*(--force|-f).*main'; then
  echo "BLOCKED: force push to main" >&2
  exit 2
fi
```

And the prompt hook:

```json
{
  "type": "prompt",
  "prompt": "Block this if it looks like a force push to a production branch"
}
```

The command hook is 5 lines of bash. It runs in under 5ms. It catches every `git push --force main` without exception.
The prompt hook calls an LLM. It takes 300-2000ms. It might decide --force-with-lease is safe enough to allow.
Both are “hooks.” Choosing the wrong type turns a guardrail into a suggestion. CLAUDE.md instructions achieve 70-90% compliance (Dotzlaw Consulting, 2026). Hooks achieve 100%, but only when you pick the right one.
This post is the decision framework I wish I’d had when I started writing hooks six months ago. For hook basics, see the Claude Code Hooks guide. For the big picture on why hooks matter, read Harness Engineering: The System Around AI Matters More Than AI.
What are the 4 Claude Code hook handler types?
Claude Code hooks come in 4 types: command (shell scripts), prompt (LLM judgment), agent (multi-turn verification with codebase access), and http (webhooks to external services). Each trades speed for intelligence differently. Pick the wrong type and your 100% guardrail drops to a probabilistic suggestion (Dotzlaw Consulting, 2026).
| Handler | Speed | Deterministic? | Codebase Access? | Best For |
|---|---|---|---|---|
| command | <5ms | Yes | No (stdin only) | Guardrails, formatting, logging |
| prompt | 300-2000ms | No | No | Nuanced decisions on Stop |
| agent | 2-10s | No | Yes (full tools) | Deep verification, architecture |
| http | 50-500ms | Yes (your server) | No | Team policies, centralized audit |
Command hooks are shell scripts. They read JSON from stdin, run fast, and return deterministic results. Use them for anything you can express as a string match, path check, or regex.
Prompt hooks call an LLM to make a judgment call. They’re slower and non-deterministic. Only use them when the decision genuinely requires reasoning, like evaluating whether a subagent’s output meets quality standards on SubagentStop.
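A SubagentStop prompt hook registration might look like this. This is a sketch that reuses the `"type": "prompt"` handler shape from the intro example; the exact registration schema is an assumption, so check the official hooks reference before copying:

```json
{
  "hooks": {
    "SubagentStop": [
      {
        "hooks": [
          {
            "type": "prompt",
            "prompt": "Evaluate whether the subagent's output meets the task's quality bar. If it does not, block and explain what is missing."
          }
        ]
      }
    ]
  }
}
```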
Agent hooks are the heaviest option. They spawn a full Claude Code session that can read files, search code, and run tools. Reserve them for verification tasks that need codebase context, like checking a refactor didn’t break module boundaries before Stop.
HTTP hooks POST to your server. They’re useful for centralized team policies and audit logging. They run async by default, so they don’t block the agent.
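A centralized-policy registration might look like the sketch below. The URL is hypothetical, and the field name (`url`) is an assumption about the http handler's schema; verify against the hooks reference:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "http",
            "url": "https://hooks.internal.example.com/claude/audit"
          }
        ]
      }
    ]
  }
}
```

Your server receives the same tool-call JSON a command hook reads from stdin, which is what makes this useful for team-wide audit logging.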
The critical rule: never use prompt-based hooks for safety boundaries. Prompt hooks involve LLM judgment, and LLMs can be wrong. Safety boundaries need deterministic command hooks.
Key insight: Claude Code’s 4 hook handler types trade speed for intelligence. Command hooks run in <5ms with 100% deterministic enforcement. Prompt hooks add LLM judgment at 300-2000ms but introduce non-determinism. The critical rule: never use prompt-based hooks for safety boundaries (Dotzlaw Consulting, 2026).
When should you use CLAUDE.md vs a hook vs both?
Use CLAUDE.md for conventions the agent should follow: naming, style, architecture preferences. Use hooks for rules the agent must never break: no force pushes, no edits to secrets, always format on save. Use both when you want the agent to understand WHY a rule exists while the hook enforces the WHAT. Context files alone cap your improvement at ~4% (ETH Zurich study, full breakdown in the pillar post).
Here’s the decision tree:
```
Is this a HARD constraint (must NEVER be violated)?
├── YES → Can you test it with a string/path/regex check?
│   ├── YES → Command hook (PreToolUse)
│   └── NO → Does it need codebase context?
│       ├── YES → Agent hook
│       └── NO → Prompt hook or HTTP hook
└── NO → Is it a preference or convention?
    ├── YES → CLAUDE.md (~70-90% compliance)
    └── NO → Is it a repeatable workflow?
        ├── YES → Skill or .claude/commands/
        └── NO → You probably don't need it
```

The key boundary: CLAUDE.md is advice. A hook is enforcement. HumanLayer keeps their CLAUDE.md under 60 lines for this reason: fewer instructions, more hooks. The shorter your CLAUDE.md, the more likely the agent follows each instruction. The longer it gets, the more diluted each rule becomes in 200K tokens of context.
When should you use both? When the constraint is structural (hook enforces it) but the agent also benefits from understanding the reasoning. Example:
- Hook: PreToolUse blocks `git push --force` to main
- CLAUDE.md: “We use `--force-with-lease` instead of `--force` because a force push overwrote a teammate’s commits in March 2026”
The hook prevents the bad action. The CLAUDE.md helps the agent choose the right alternative.
For more on writing effective instruction files, see Why CLAUDE.md Is the Most Important File in Your Project.
Key insight: CLAUDE.md achieves ~70-90% compliance because it competes with 200K tokens of context for the model’s attention. A PreToolUse command hook achieves 100% compliance because it runs outside the LLM’s reasoning chain. Use CLAUDE.md to explain WHY. Use hooks to enforce WHAT (Dotzlaw Consulting, 2026).
Which hook events should you implement first?
Start with 3 events in this order: (1) PreToolUse for security guardrails, (2) PostToolUse for auto-formatting and logging, (3) Stop for completion verification. In my experience, these 3 cover most production use cases. Add SessionStart and SubagentStop only when you need environment setup or multi-agent quality gates.
| Priority | Event | Handler | What It Does | Setup Time |
|---|---|---|---|---|
| 1st | PreToolUse | command | Block dangerous actions | 15 min |
| 2nd | PostToolUse | command | Auto-format, log actions | 20 min |
| 3rd | Stop | agent | Verify work before done | 30 min |
| 4th | SessionStart | command | Load env vars, context | 10 min |
| 5th | SubagentStop | prompt | Validate subagent output | 20 min |
| 6th | PermissionRequest | command | Auto-approve safe patterns | 15 min |
| 7th | PreCompact | command | Preserve context on compact | 15 min |
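The 4th-priority SessionStart row can be as small as the sketch below. The `.claude/hooks/context.md` path is hypothetical, and the assumed behavior (a SessionStart hook's stdout on exit 0 is injected into the session context) should be verified against the hooks reference for your Claude Code version:

```shell
#!/bin/bash
# SessionStart sketch: whatever this prints is assumed to be injected
# into the session context (verify against your Claude Code version).
emit_session_context() {
  # Surface the current branch so the agent starts oriented
  echo "branch: $(git branch --show-current 2>/dev/null || echo unknown)"
  # .claude/context.md is a hypothetical per-project status file
  [ -f .claude/context.md ] && cat .claude/context.md
  return 0
}
emit_session_context
```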
For the full list of 21 events, see the official hooks reference.
Your first hook: a PreToolUse command hook that blocks force pushes to protected branches. Copy-paste ready:
```bash
#!/bin/bash
# Blocks git push --force and -f to main/master/production

COMMAND=$(jq -r '.tool_input.command // empty' < /dev/stdin)

if echo "$COMMAND" | grep -qE 'git push.*(--force|-f)' && \
   echo "$COMMAND" | grep -qE '(main|master|production)'; then
  echo "BLOCKED: force push to protected branch" >&2
  exit 2
fi

exit 0
```

Register it in `.claude/settings.json`:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "bash .claude/hooks/block-force-push.sh"
          }
        ]
      }
    ]
  }
}
```

For teams with SOC2 or compliance requirements, priorities 1 and 2 together (PreToolUse + PostToolUse) create the audit trail your compliance team needs: every action logged, dangerous actions blocked before execution.
For more PreToolUse security patterns, see Stop npm Supply Chain Attacks with Claude Code Hooks.
Key insight: Verification feedback loops are what Boris Cherny considers the single most important factor for agent quality (full context in the pillar post). PostToolUse hooks and Stop hooks are that feedback loop, built into the agent lifecycle. They run automatically, not at the model’s discretion.
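As a concrete instance of that feedback loop, here is a minimal PostToolUse auto-format sketch. The jq path (`.tool_input.file_path`) assumes the Edit/Write tool input shape, and the formatter commands (`black`, `prettier`) are placeholders; swap in whatever your project uses:

```shell
#!/bin/bash
# PostToolUse sketch: format the file that was just edited.
# Assumes Edit/Write tool input carries .tool_input.file_path.
format_edited_file() {
  local file
  file=$(jq -r '.tool_input.file_path // empty')
  case "$file" in
    *.py)      command -v black    >/dev/null && black --quiet "$file" ;;
    *.js|*.ts) command -v prettier >/dev/null && prettier --write "$file" >/dev/null ;;
  esac
  return 0  # PostToolUse runs after the fact; never signal a block (exit 2) here
}
```

Because PostToolUse cannot undo anything, the function always succeeds; its only job is to react.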
How do you handle multiple hooks on the same event?
Hooks on the same event run in definition order. For PreToolUse, the strictest decision wins: deny beats defer, defer beats ask, ask beats allow. If any hook denies, the action is blocked regardless of what other hooks return. Chain hooks from fastest to slowest to minimize latency on allowed actions (Claude Code docs).
A 3-hook PreToolUse chain in practice:
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "bash .claude/hooks/block-force-push.sh" },
          { "type": "command", "command": "bash .claude/hooks/validate-paths.sh" },
          { "type": "command", "command": "bash .claude/hooks/log-action.sh" }
        ]
      }
    ]
  }
}
```

Put security blocks first (fastest, most critical), validation second, logging last. If hook 1 denies, hooks 2 and 3 still run, but their decisions can’t override the deny.
The decision precedence hierarchy:
- `deny` → Action blocked. Feedback sent to model.
- `defer` → Action paused (headless mode). External UI resumes.
- `ask` → User prompted for confirmation.
- `allow` → Action proceeds. Skips built-in permission check.
- (none) → Default behavior. Built-in permission check runs.

Watch for slow hooks. A command hook calling an external API blocks the entire agent loop until it returns or times out. If you need external validation, use the http handler type: HTTP hooks run async and won’t stall your session.
For the full event system breakdown, see Claude Code Has 17 Hook Events Now.
Key insight: Claude Code hooks use a strict decision precedence on the same event: deny > defer > ask > allow. If any hook in the chain returns deny, the action is blocked regardless of what other hooks decide. This means you can safely add logging hooks after security hooks without weakening enforcement (Claude Code docs).
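The precedence rule is just a maximum over a severity ordering. This toy reducer illustrates the idea; it is purely illustrative, not Claude Code internals:

```shell
# Toy illustration of "strictest decision wins" -- not Claude Code internals.
severity() {
  case "$1" in
    deny) echo 4 ;; defer) echo 3 ;; ask) echo 2 ;; allow) echo 1 ;; *) echo 0 ;;
  esac
}

# resolve takes each hook's decision and returns the strictest one
resolve() {
  local best="" best_s=0 d s
  for d in "$@"; do
    s=$(severity "$d")
    if [ "$s" -gt "$best_s" ]; then best_s=$s; best="$d"; fi
  done
  echo "${best:-default}"
}

resolve allow ask deny   # prints "deny"
```

This is why a trailing logging hook that returns nothing (severity 0) can never weaken an earlier deny.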
Get weekly Claude Code tips — One practical tip per week. No fluff, no spam. Subscribe to AI Developer Weekly →
What are the most common hook mistakes (and how do you debug them)?
Three mistakes account for most “my hook doesn’t work” reports, including GitHub issue #6305: (1) wrong exit code, where exit 1 is a silent error but exit 2 is a block; (2) hook path typo, where the hook silently doesn’t run; (3) forgetting to read stdin, so the hook gets zero context about the tool call.
Exit code cheat sheet
| Exit Code | Meaning | Model Sees Feedback? |
|---|---|---|
| 0 | Success (parse JSON from stdout) | Yes, if JSON provided |
| 2 | Block action (stderr becomes feedback) | Yes |
| Any other | Silent error (logged in verbose only) | No |
The exit 1 vs exit 2 distinction is the #1 gotcha. Exit 1 means “my hook crashed.” Claude Code logs it quietly and continues. Exit 2 means “I’m deliberately blocking this action.” Claude Code stops the tool call and sends your stderr message back to the model.
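The contract is easy to probe in isolation. This toy function mirrors the blocker's logic with `return` standing in for `exit` so both paths run in one shell:

```shell
# Toy version of the blocker: return codes stand in for a hook's exit codes.
check_push() {
  local cmd="$1"
  if [[ "$cmd" == *--force* && "$cmd" == *main* ]]; then
    echo "BLOCKED: force push to protected branch" >&2
    return 2   # a real hook would: exit 2 (deliberate block, stderr -> model)
  fi
  return 0     # a real hook would: exit 0 (allow)
}

rc=0; check_push "git push --force main" 2>/dev/null || rc=$?
echo "force push to main -> $rc"    # prints "force push to main -> 2"
rc=0; check_push "git push origin feature" || rc=$?
echo "normal push -> $rc"           # prints "normal push -> 0"
```

If you accidentally `exit 1` where you meant `exit 2`, both commands above would "pass" from the model's point of view, which is exactly how silent hook failures ship.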
Debug workflow
Test any hook manually by piping JSON to it:
```bash
echo '{"tool_name":"Bash","tool_input":{"command":"git push --force main"}}' \
  | bash .claude/hooks/block-force-push.sh
echo "Exit code: $?"
```

If the hook doesn’t run at all, check this list:
- Path correct? Command path is relative to project root, not the hooks directory
- Matcher correct? `"matcher": "Bash"` matches the tool name, not the command content
- Settings level? Project (`.claude/settings.json`) overrides user (`~/.claude/settings.json`)
- File executable? Run `chmod +x .claude/hooks/your-hook.sh`
- JSON valid? A syntax error in settings.json silently disables all hooks
For a real-world story of misconfigured hooks and a deleted file, see I Set Up 3 Layers of Defense in Claude Code. It Deleted My File Anyway.
Key insight: 81% of AI agents are in operation, yet only 14.4% have full security approval (Authority Partners, 2026). The gap between “we have guardrails” and “our guardrails actually work” often comes down to exit codes, path typos, and untested configurations.
Try it now:
1. Copy the force-push blocker script into `.claude/hooks/block-force-push.sh`
2. Register it in `.claude/settings.json` using the JSON config above
3. Make it executable: `chmod +x .claude/hooks/block-force-push.sh`
4. Test it: `echo '{"tool_name":"Bash","tool_input":{"command":"git push --force main"}}' | bash .claude/hooks/block-force-push.sh`
5. Verify exit code 2 (blocked). You now have one production-ready guardrail.
These hooks are Layer 4 of a production harness. For the full 5-layer blueprint, see 5 Layers of a Production-Ready Claude Code Harness.
FAQ
What are the 4 Claude Code hook handler types?
Command (shell scripts, <5ms, deterministic), prompt (LLM judgment, 300-2000ms), agent (multi-turn verification with codebase access, 2-10s), and http (webhooks, 50-500ms). Use command hooks for guardrails and formatting. Use prompt or agent hooks for nuanced decisions that require reasoning. Use http hooks for team-wide policies and centralized logging.
Should I use CLAUDE.md or a hook for security rules?
Hooks. CLAUDE.md instructions achieve 70-90% compliance because they compete with 200K tokens of context for the model’s attention. A PreToolUse command hook achieves 100% compliance because it runs outside the LLM’s reasoning chain. Use CLAUDE.md to explain WHY a rule exists. Use hooks to enforce WHAT must happen.
What is the difference between PreToolUse and PostToolUse hooks?
PreToolUse runs BEFORE a tool executes and can block it (exit code 2) or modify its input. PostToolUse runs AFTER execution and cannot undo the action, but it can auto-format code, log what happened, or inject feedback that Claude sees on its next turn. PreToolUse for prevention, PostToolUse for reaction.
Can Claude Code hooks run in headless mode?
Yes. All hook types work in headless mode (claude -p). PreToolUse hooks can return permissionDecision: "defer" to pause execution for external UI collection, then resume with claude -p --resume <session-id>. This makes hooks fully compatible with CI/CD pipelines and SDK-based workflows.
Build your harness, not just your prompts. Hooks are one layer. The full system includes memory, tools, permissions, and observability. Start the Claude Code Mastery course to learn all five.
What to Read Next
- 5 Layers of a Production-Ready Claude Code Harness — Hooks are Layer 4. This post covers the complete blueprint for all 5 layers, with file templates and a setup checklist.
- Harness Engineering: The System Around AI Matters More Than AI — The pillar post explaining why the system around your AI agent matters more than the model itself.
- AGENTS.md Is a Failure Log, Not an Instruction File — The failure-first method for building your CLAUDE.md, where each line maps to a real agent failure you’ve prevented.