I spent way too long reading docs for every new tool I wanted to pick up. Cover-to-cover, “I’ll remember this later” type reading. I never remembered it later. Retained maybe 20% on a good day.
Then about 11–12 months ago I tried something stupid simple that completely changed how I learn new tools. I’ve been using it ever since, and honestly I don’t know why I didn’t think of it sooner.
The idea: stop asking the AI to write code for you. Feed it documentation and make it coach you instead.
The Problem with How We Learn
Every developer has been through this cycle:
Path A — Read the docs first. You spend a weekend reading documentation cover-to-cover. By Monday, you’ve forgotten 80% of it. The concepts that stuck are the easy ones you probably already knew. The hard stuff — the things you actually needed — evaporated.
Path B — Just start building and ask AI for help. You open Claude or ChatGPT and say “build me a REST API with authentication.” The AI spits out code. It works… sort of. You don’t understand half of it. When something breaks, you can’t debug it because you never learned the fundamentals. You’re copy-pasting from an AI instead of copy-pasting from Stack Overflow. Same problem, different source.
Path C — Buy a course. You drop $50–200 on a video course. You watch the first three modules. Life happens. You never finish it. And even if you did — the UI changed two weeks after the course was recorded. Half the screenshots don’t match anymore.
None of these paths work because they all separate learning from doing. You either learn without doing (docs, courses) or do without learning (raw AI prompting).
The sweet spot is both at the same time.
The Source-of-Truth Mentoring Method
The setup takes about 5 minutes. Here’s the entire process.
Step 1: Find Good Markdown Docs
Every major tool has documentation, guides, or READMEs on GitHub in .md format. Official docs, open-source learning repos, getting-started guides — whatever’s available.
Some examples:
- LangChain — getting-started guide + cookbook examples on GitHub, all Markdown
- Tailwind CSS — docs repo on GitHub
- Next.js — comprehensive docs in their GitHub repo
- Claude Code — Anthropic’s docs, plus open-source learning repos
The key requirement is that the source material is text-based (Markdown, plain text, or similar). No videos — AI can’t parse video content as context.
How do you know if a doc source is good enough? Simple criteria:
- It has step-by-step instructions, not just API reference
- It was updated within the last 6 months
- It covers concepts, not just syntax
- It’s structured with headings and sections (AI navigates structured docs much better)
If you’re not sure, just try it. It’s free. The worst that happens is the AI’s mentoring is mediocre, and you try a different source.
Step 2: Dump Them Into Your AI’s Context
Any AI tool with a decent context window works:
- Claude Desktop — create a Project, upload files to Project Knowledge
- Cursor — add files to your workspace
- ChatGPT — use file upload in the chat
- Claude Code — reference files via CLAUDE.md
Just drag the .md files in. This becomes the AI’s “source of truth” — it now has something real to reference instead of relying on potentially outdated training data.
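If the docs live in a cloned repo with dozens of scattered .md files, it helps to bundle them into a single file before uploading. A minimal sketch of such a helper (the function name and heading format are my own, not part of any tool):

```python
from pathlib import Path

def bundle_markdown(docs_dir: str, out_file: str) -> int:
    """Concatenate every .md file under docs_dir into one upload-ready file.

    Each file's relative path is written as a heading so the AI can cite
    which doc a concept came from. Returns the number of files bundled.
    """
    docs = sorted(Path(docs_dir).rglob("*.md"))
    with open(out_file, "w", encoding="utf-8") as out:
        for doc in docs:
            rel = doc.relative_to(docs_dir)
            out.write(f"\n\n# Source: {rel}\n\n")
            out.write(doc.read_text(encoding="utf-8"))
    return len(docs)
```

Keeping the source path above each section also makes the mentor's "this is covered in section X" answers more precise.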
Step 3: The Mentor Prompt
This is the core of the method. Paste this prompt:
```
You are my senior mentor. I have provided documentation as context.
I want to learn by doing. Give me ONE small practical task at a time.
Wait for me to complete it. Check my work. Then tell me exactly which
concept from the documentation I just learned. If I get stuck, give me
the exact command. Do NOT lecture me. Just give me tasks.
```

That's it. No framework. No course. No 47-video playlist.
Why This Works (And Why Raw Prompting Doesn’t)
Anyone who’s coded with AI knows it acts like a confident but clueless junior dev half the time. It hallucinates APIs, invents CLI flags that don’t exist, and gets stuck in bug loops trying to fix its own mistakes.
The documentation anchors it. Instead of pulling answers from training data (which might be outdated or just wrong), it references the actual docs you gave it. The hallucination rate drops dramatically because the AI has a verified source to check against.
And for learning specifically — it flips the entire model. Instead of frontloading 50 pages of theory and hoping you remember it when you need it, you learn each concept at the exact moment you use it. Like pair programming with a senior dev who actually read the manual.
Three things make this better than a traditional course:
- You learn by doing, not watching. Every “lesson” is a task you execute yourself.
- The AI adapts to your pace. Stuck? It gives you the exact command. Breezing through? It jumps to harder tasks.
- The docs are always current. When a tool updates, grab the new docs. Your “course” is instantly updated. No waiting for some instructor to re-record Module 7.
Real Examples at Every Level
Beginner: Learning Tailwind CSS
I used this method to learn Tailwind CSS. Grabbed their docs from GitHub, told Claude: “Teach me Tailwind by styling a landing page. One utility class at a time.”
First task: “Add a blue background to this div using a Tailwind class.” Easy.
Then: “Center this text using flexbox utilities.” Did it.
Then: “Make this layout responsive with md: and lg: breakpoints.” This is where it got interesting — I used a weird combination of classes that technically worked but was fighting against Tailwind’s design philosophy. Instead of just fixing it, the mentor caught the anti-pattern, showed me the exact docs section explaining the mobile-first approach, and told me to try again.
Within an hour I went from writing inline style="color: red" to confidently using flex, grid, responsive breakpoints — and actually understanding why each class works, not just copy-pasting from Stack Overflow.
Intermediate: Learning LangChain + RAG
About 11 months ago I needed to build a RAG pipeline for a side project. Never touched LangChain before. Normally I would’ve spent a weekend reading their docs, watching YouTube tutorials, building half a thing, then forgetting everything by Monday.
Instead I grabbed their getting-started guide and a few cookbook examples from the LangChain GitHub — all Markdown. Dumped them into Claude Desktop.
Told it: “Teach me LangChain by building a RAG pipeline that can answer questions about my own documents. One task at a time.”
First task: “Install LangChain and create a script that loads a single PDF.” Easy.
Then: “Split that PDF into chunks using RecursiveCharacterTextSplitter with a chunk size of 500.” Did it. The mentor explained why 500 and not 1000, citing the exact section from the docs about chunk overlap strategies.
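To make that chunking task concrete: LangChain's splitter is doing something you can sketch in a few lines of plain Python. This is a deliberately naive fixed-window version, not the real RecursiveCharacterTextSplitter (which also tries to break on paragraph and sentence boundaries):

```python
def split_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Naive character splitter: fixed-size windows that overlap.

    Overlap matters because a sentence cut in half at a chunk boundary
    is hard to retrieve; repeating the tail of one chunk at the head of
    the next keeps boundary context available in both chunks.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
    return chunks
```

Writing even this toy version yourself is a good first task to ask the mentor for: it makes the chunk_size/overlap trade-off tangible before you touch the real API.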
Then: “Add a vector store using FAISS and embed your chunks.” This is where I messed up — I picked the wrong embedding model and my retrieval results were garbage. Instead of just fixing it for me, the mentor pointed me to the specific docs section about choosing embeddings and told me to try again with a different model.
That one moment taught me more about embeddings than any “RAG tutorial in 10 minutes” video could. Because I felt the failure first, then understood why it mattered.
A couple hours later I had a working RAG app that could answer questions about my own Markdown files. And I’d internalized concepts like chunking strategy, embedding selection, retrieval vs generation — stuff I definitely would’ve glazed over reading docs linearly.
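The retrieval mechanic that broke when I picked the wrong embedding model can be illustrated with a toy "embedding" (a bag-of-words vector) and cosine similarity. This is obviously not a real embedding model, but it shows the point: retrieval quality is entirely a function of how the embedding scores similarity.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. Real embedding models map
    # text to dense vectors that capture meaning, not just word overlap.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    # Rank chunks by similarity to the query, return the top k.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]
```

Swap the toy embed() for a bad real model and you get exactly the garbage retrieval I hit: the pipeline runs fine, but the ranking is wrong.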
Prompt Variations for Different Goals
The base mentor prompt works great for learning from scratch. But you can adapt it for specific situations:
Debugging Mentor
```
You are my debugging mentor. I have provided the documentation as context.
I'm stuck on this error: [paste error].
Don't fix it for me. Instead:
1. Ask me diagnostic questions one at a time.
2. Guide me to find the root cause myself.
3. After I fix it, tell me which concept from the docs I just learned.
```

Refactoring Coach

```
You are my refactoring coach. I have provided the documentation as context.
Review this code: [paste code or file path].
Give me ONE refactoring task at a time based on best practices from the docs.
Wait for me to make the change. Check my work. Then explain which
principle I just applied and where it's covered in the docs.
```

"Explain Like I'm Switching Stacks" Mentor

```
You are my mentor. I'm an experienced [React/Python/whatever] developer
learning [new tool] for the first time. I have provided the docs as context.
Map new concepts to things I already know from [my existing stack].
Give me tasks that leverage my existing knowledge. Skip the basics
I'd already understand. Focus on what's genuinely different.
```

Tools Compared: Where to Run This Method
| Tool | Context Method | Best For | Limitation |
|---|---|---|---|
| Claude Desktop | Project Knowledge (persistent) | Long learning sessions across days | File size limits per project |
| Cursor | Workspace files + @docs | Learning while actively coding | Context competes with your codebase |
| ChatGPT | File upload per chat | Quick one-off learning | Files don’t persist across chats |
| Claude Code | CLAUDE.md + file references | CLI-native developers | Terminal-only interface |
| Gemini | File upload / Google AI Studio | Large doc sets (1M+ token window) | Less coding-focused responses |
Claude Desktop’s Project Knowledge is the smoothest experience because files persist across conversations — you don’t re-upload every time. But the method works with any AI that lets you upload documents as context.
How to Evaluate If Your Doc Source Is Good Enough
Not all docs are created equal. Here’s a quick checklist:
Green flags:
- Has a “Getting Started” or “Quick Start” section
- Includes code examples alongside explanations
- Organized with clear headings and sections
- Updated within the last 6 months
- Available as Markdown on GitHub
Red flags:
- API reference only (no conceptual explanations)
- Last updated 2+ years ago
- Single massive file with no structure
- Only available as PDF (harder for AI to navigate, but still works)
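The checklist above is mechanical enough to script. Here's a rough heuristic scorer for a local Markdown file; the thresholds are my own guesses, not anything official:

```python
import re

def doc_quality_flags(markdown: str) -> dict:
    """Score a Markdown doc against a few of the green/red flags above.

    Purely heuristic: checks for a getting-started section, worked code
    examples (fenced blocks), and structure (enough headings to navigate).
    """
    headings = re.findall(r"(?m)^#{1,6}\s+\S", markdown)
    fences = re.findall(r"(?m)^`{3}", markdown)
    return {
        "has_getting_started": bool(
            re.search(r"getting started|quick ?start", markdown, re.I)
        ),
        "has_code_examples": len(fences) >= 2,  # fences come in pairs
        "is_structured": len(headings) >= 3,
    }
```

It can't check freshness or conceptual depth, so treat it as a pre-filter: anything that fails all three checks probably isn't worth uploading.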
Pro tip: If the official docs are thin, look for community guides. Many popular tools have awesome-lists, community tutorials, or open-source courses on GitHub — all in Markdown, all free. These often work better than official docs because they’re written from a learner’s perspective.
Beyond Solo Learning: Team Onboarding
I liked this approach so much that I started using it for onboarding new developers on my team. New person joins, they get a set of Markdown guides, drop them into Claude, and start building. It works significantly better than “here’s the Confluence page, good luck.”
The key insight: if you have internal documentation (architecture guides, coding conventions, onboarding checklists), converting them to Markdown and using this method turns static docs into an interactive onboarding experience. The new dev builds something real on day one instead of spending three days reading.
Getting Started
- Pick a tool you want to learn — anything with decent written docs
- Grab the Markdown docs from GitHub or the official site
- Create a project in Claude Desktop (or your AI of choice) and upload the docs
- Paste the mentor prompt and tell it what you want to build
- Follow the tasks — one at a time, checking your work as you go
The docs are free. The AI subscription is about $20 a month. The trick isn't skipping documentation — it's making the AI teach it to you through practice instead of expecting yourself to sit down and read it all.
I’ve been refining this approach for a while and open-sourced some Markdown guides specifically formatted for this method. Check out the ShipWithAI GitHub if you want to try them — including a complete Claude Code learning path with 64 modules designed to be fed into AI as a knowledge base.
Related: If you specifically want to learn Claude Code using this method, we have two dedicated guides:
- Turn 136 Lessons Into a Personal AI Mentor — for developers comfortable with the terminal
- Learn Claude Code Without Touching the Terminal — for anyone using Claude Desktop