
# Tips & Tricks

import { Aside, Card, CardGrid } from '@astrojs/starlight/components';

- **Run `/init` immediately in a new project.** A well-crafted CLAUDE.md with the six essential sections (Overview, Architecture, Conventions, Commands, Constraints, Context) saves more time than any other single action.
- **Never let Claude read `.env` directly.** Create `.env.example` with placeholders and tell Claude: "Read .env.example and generate config using process.env." This simple habit prevents accidental secret exposure; see the sketch after this list.
- **Tier your command approvals.** Read-only commands → auto-approve. Write commands → read first. Destructive commands → always deny. After a week, you will instinctively know what to approve and what to check.
- **Structure every non-trivial prompt with CREF**: **C**ontext, **R**eference, **E**xpectation, **F**ormat. Teams using CREF reduced code-review rejections from 35% to 12%.
- **Read in layers.** Never say "read all files in src/." Instead: (1) ask for the directory structure, (2) ask for function signatures, (3) only then read the specific file you need. This 3-layer approach uses 60% less context.
- **Compact at 50-70% context usage, not at 90%.** Early compaction preserves more useful context. Think of it like defragmenting: better done proactively than in an emergency.
- **Set boundaries before auto-coding.** Before any auto-coding session, specify ALLOWED files, FORBIDDEN files, stop conditions, and a verification command. Without boundaries, Claude may modify files you did not intend.
- **Stop after three failed attempts.** If the same approach fails 3 times with similar results, stop immediately and ask: "What are we assuming that might be wrong?" This one rule saves an average of $8 and 45 minutes per stuck session.
- **On legacy codebases, spend the first session only reading.** No modifications. Use the 3-layer strategy: Structure (the map) → Patterns (the routes) → Details (the streets). One team mapped a 50,000-line Java codebase in 5 days versus an estimated 3 weeks.
- **Commit after every single logical change during incremental refactoring.** An 800-line legacy function refactored via 33 small commits over 4 weeks had zero production incidents; a big-bang rewrite of the same function broke payment, shipping, and audit logging.
- **Match models to task complexity.** A Vietnamese startup cut its Claude bill from $1,200 to $380 per month this way. Their discovery: "Haiku could do 80% of what we were using Opus for."
- **Build a template library.** Create reusable templates for your most common tasks (`/screen`, `/api-call`, `/debug`). Teams report 40-60% faster task completion once their library reaches 5-10 recipes.
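The `.env.example` habit in practice, as a minimal sketch; the variable names here are hypothetical, so mirror the keys of your real `.env`:

```bash
# Placeholder-only .env.example: safe for Claude to read, nothing secret inside
cat > .env.example <<'EOF'
DATABASE_URL=postgres://user:CHANGE_ME@localhost:5432/app
API_KEY=CHANGE_ME
EOF
```

Then prompt: "Read .env.example and generate config using process.env." The real `.env` never enters the context window.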
| # | Technique | How | Savings |
| --- | --- | --- | --- |
| 1 | Selective reading | Ask for signatures/types first, full files later | -60% |
| 2 | Filtered output | Use `head -20`, `tail -50`, `grep -A 5` | -80% |
| 3 | Strategic `/compact` | Compact at 30%, 60%, 80% markers | Resets to ~50% |
| 4 | Prompt brevity | "Fix bug in saveUser", not a 500-word essay | -50% |
| 5 | File chunking | "Read lines 1-100 of X" for large files | Read only needed parts |
| 6 | Avoid re-reads | Ask "Do you still have auth.ts in context?" | -100% duplicate cost |
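Rows 2 and 5 in practice; the file paths and search strings are hypothetical:

```bash
head -20 src/server.ts                  # first 20 lines instead of the whole file
tail -50 logs/app.log                   # only the most recent log entries
grep -A 5 "function saveUser" src/*.ts  # 5 lines of context after each match
sed -n '1,100p' src/big-module.ts       # "read lines 1-100" as a shell command
```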
```bash
# Check cost every 20-30 minutes
/cost
# Sample output:
# Input: 45,231 tokens ($0.14)
# Output: 12,847 tokens ($0.19)
# Total: 58,078 tokens ($0.33)

# If context > 60%, compact proactively
/compact
```

Large Android/KMP projects (100K+ lines) require special strategies to stay within context limits.

```
Layer 1: Structure (The Map)
├── "Show me the project directory structure"
├── "What are the main modules?"
└── "Where are the entry points?"

Layer 2: Patterns (The Routes)
├── "What architecture pattern is used?"
├── "Show me the dependency graph between modules"
└── "How does data flow from API to UI?"

Layer 3: Details (The Streets)
├── "Read src/auth/LoginViewModel.kt"
├── "Trace the payment flow from button click to API call"
└── "What happens when the token expires?"
```
In your CLAUDE.md, include: Clean Architecture layers, MVVM conventions, Compose vs XML preference, module boundaries, Gradle commands (`./gradlew assembleDebug`, `./gradlew test`), and the ProGuard rules that must never be modified.

For Kotlin Multiplatform, tell Claude which code is shared (`commonMain`) and which is platform-specific (`androidMain`, `iosMain`). Without this, Claude may write platform-specific code in shared modules.
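A minimal sketch of such a CLAUDE.md section; the module names and commands are typical KMP defaults, so adjust them to your layout:

```bash
# Append a module-boundary section to CLAUDE.md (heredoc content is Markdown)
cat >> CLAUDE.md <<'EOF'
## Module Boundaries (KMP)
- shared/src/commonMain: platform-agnostic code only, no android.* imports
- shared/src/androidMain, shared/src/iosMain: platform-specific `actual` implementations
- androidApp: Compose UI with MVVM ViewModels (no XML layouts)
- Build: ./gradlew assembleDebug | Test: ./gradlew test
- NEVER modify proguard-rules.pro
EOF
```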
```
Phase 1: Map (1 session)
"Identify all files using deprecated API X"

Phase 2: Plan (1 session)
"Create a migration plan: which files first, which have dependencies"

Phase 3: Execute (N sessions, batched)
"Migrate files 1-10 from the plan. Test after each file."
/compact
"Migrate files 11-20..."
```

## Combining Claude with n8n & External Tools

| Pattern | Use Case | Example |
| --- | --- | --- |
| Sequential Pipeline | Multi-step transformation | Email → Extract data → Generate report |
| Fan-Out/Fan-In | Parallel independent tasks | Analyze 10 PRs simultaneously |
| Classification Router | Different handling per type | Bug vs feature vs question triage |
| Human-in-the-Loop | Need approval | Code review with human sign-off |
| Batch Processing | Many items | Process 200+ customer reviews |
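As a rough sketch, Fan-Out/Fan-In also works from a plain shell, assuming the `gh` CLI and Claude Code's non-interactive `-p` mode; the PR numbers are hypothetical:

```bash
# Fan-out: review three PRs in parallel; fan-in: wait, then collect results
for pr in 101 102 103; do
  gh pr diff "$pr" | claude -p "Summarize the riskiest changes in this diff" > "review-$pr.md" &
done
wait
cat review-*.md
```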
- In n8n Execute Command nodes, use `/usr/local/bin/claude` instead of just `claude`; the n8n process may not have your shell PATH.
- Always set the timeout in Execute Command nodes to 60000 ms or more. Claude Code operations can take 30-60 seconds, especially for code generation tasks.
- For complex prompts with quotes and special characters, build the prompt string in a Set node first, then pass it to the Execute Command node. This prevents quote-breaking issues.
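A sketch of the resulting Execute Command value, assuming the prompt was assembled in a prior Set node under a hypothetical `prompt` field:

```bash
# Full binary path; -p runs Claude Code non-interactively
/usr/local/bin/claude -p "{{ $json.prompt }}" --output-format json
```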
- Marketing agency: 50+ daily email briefs auto-processed, 2 hours reduced to 5 minutes
- Customer reviews: 200+ reviews analyzed with batch + sequential + router, 4 hours down to 30 minutes
- Code deployment: PR merged → auto-generate changelog → notify Slack → update docs

Watch for these signs that Claude may be hallucinating:

| Red Flag | Example | What to Do |
| --- | --- | --- |
| Generic descriptive names | `awesome-validator`, `fast-json-loader` | Verify the package exists: `npm view <name>` |
| No source links provided | "Use the X library" with no URL | Ask: "Link me to the GitHub repo" |
| Version number claims | "Added in v3.2.1" | Check the official changelog |
| Suspiciously perfect match | Exactly the API you need, no trade-offs | Too good? Verify independently |
| Mixed syntax styles | React class + hooks in same component | Ask Claude to pick one approach |
| Confident but wrong | "This is the standard way to…" | Confidence does not equal accuracy |
| What to Verify | Command | Source |
| --- | --- | --- |
| npm package exists | `npm view <package>` | npmjs.com |
| Python package exists | `pip show <package>` | pypi.org |
| CLI flag exists | `command --help` | Official docs |
| File path exists | `ls -la <path>` | Your system |
| API endpoint works | `curl -I <url>` | Live server |
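A quick verification pass using only the commands from the table; the package name, file path, and URL are examples:

```bash
npm view super-fast-orm        # an E404 error means the package does not exist
pip show requests              # errors if the package is not installed locally
ls -la src/auth/LoginViewModel.kt          # does the file path actually exist?
curl -I https://api.example.com/v1/users   # hypothetical URL; a 2xx status means live
```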

When Claude gets stuck in a loop or produces incorrect output:

| Step | Action | Prompt |
| --- | --- | --- |
| 1. Redirect | Change approach | "Stop. Try a completely different approach." |
| 2. Inform | Provide missing context | "You might be missing: the API returns XML, not JSON." |
| 3. Decompose | Break into smaller parts | "Forget the full solution. Just solve the auth part first." |
| 4. Refresh | Clear stale context | `/compact` |
| 5. Reset | Start fresh | `/clear` |
| 6. Human | Take over | You write the code manually |
- **Phantom package.** Claude suggests `npm install super-fast-orm`. You install it. It does not exist. **Prevention**: always run `npm view <package>` before installing.
- **Phantom API field.** Claude uses `response.auto_confirm` in a payment integration. The field does not exist in the API; one fintech team lost 2 days debugging this in staging. **Prevention**: cross-reference every API field with the official docs.
- **Deprecated API.** Claude uses `componentWillMount` in React, which was deprecated years ago. **Prevention**: if Claude suggests any lifecycle method, verify it against the current React docs.
- **Wrong version claim.** Claude says "PostgreSQL supports `UPSERT` natively since version 9.0." It was actually version 9.5. A small detail with big consequences if you target older versions. **Prevention**: always verify version-specific claims.

Add this section to your CLAUDE.md to reduce hallucinations:

```md
## Verification Rules

- NEVER suggest packages without confirming they exist on npm/pypi
- ALWAYS provide GitHub or docs URLs for third-party libraries
- If unsure about an API, say "needs verification" instead of guessing
- When suggesting version-specific features, cite the changelog
```