How I use Claude Code: Separation of planning and execution
TL;DR Highlight
Running a research → plan → annotate → execute workflow before letting Claude Code write any code dramatically reduces wasted effort and AI hallucinations.
Who Should Read
Developers actively using Claude Code or similar AI coding agents, and anyone looking to improve the reliability of LLM-assisted development workflows.
Core Mechanics
- The recommended workflow is: Research (gather context about the codebase) → Plan (define what to build and how) → Annotate (add code comments and docs to guide the AI) → Execute (let Claude Code write the code).
- Skipping the research and planning phases is the most common mistake — jumping straight to 'write this feature' leads to Claude making wrong assumptions about the codebase structure.
- Annotation is a key differentiator: adding detailed comments in the relevant files before asking Claude to write code acts as precise context injection, reducing hallucination significantly.
- Breaking tasks into smaller, well-scoped subtasks with clear success criteria produces far better results than broad open-ended requests.
- The workflow treats Claude Code as a junior dev who needs good briefing materials, not a senior dev who can figure everything out autonomously.
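The "annotate" step above can be sketched as follows. This is a hypothetical example (the function, file, and comment conventions are illustrative, not the author's): intent, constraint, and relationship comments are written into the file first, so the agent reads precise context instead of guessing. The implementation below shows the kind of code those comments should pin down.

```python
# Hypothetical "annotate" step: comments added to a file BEFORE asking
# Claude Code to touch it. Names and constraints here are illustrative.

# INTENT: paginate the user list for the GET /users endpoint.
# CONSTRAINTS:
#   - page is 1-based; page and page_size must both be >= 1.
#   - out-of-range pages return an empty items list, never raise.
# RELATIONSHIPS: callers expect a dict with "items" and "total" keys
#   (key names are an assumption for this sketch).

def paginate(users: list[dict], page: int, page_size: int) -> dict:
    """Return one page of users plus the total count."""
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be >= 1")
    start = (page - 1) * page_size
    return {"items": users[start:start + page_size], "total": len(users)}
```

The point is not this particular function but that every constraint the comments state is checkable against the code, which is exactly what keeps the agent from inventing behavior.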
Evidence
- The author shared concrete before/after examples showing how annotated codebases led to significantly fewer revision cycles compared to un-annotated requests.
- HN commenters corroborated the research-first approach, with several sharing similar workflows they'd developed independently.
- One commenter noted this mirrors good engineering practice — 'YAGNI aside, writing the spec before the code is just good engineering' — and that AI just makes the cost of skipping this higher.
How to Apply
- Before starting a Claude Code session, spend 5–10 minutes reading the relevant files yourself and adding or updating comments that explain intent, constraints, and relationships.
- Use Claude in 'research mode' first — ask it to summarize the relevant parts of the codebase before asking it to write anything.
- Write a short plan (even just a few bullet points in a comment) describing what you want before triggering the execute phase.
- Scope tasks tightly: 'add pagination to the users list endpoint' beats 'improve the users API.'
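One way to hold yourself to the phase order is to write the prompts down before the session starts. A minimal sketch, assuming nothing about Claude Code's API: the phase names follow the article's workflow, while the prompt wording and file paths are purely illustrative.

```python
# Hedged sketch: phase-separated prompts drafted before a session.
# Prompt text and paths are illustrative assumptions, not the author's.
PHASES = [
    ("research", "Summarize how src/api/users.py builds its responses. "
                 "Do not write any code yet."),
    ("plan", "Propose a 3-step plan to add limit/offset pagination to "
             "GET /users. Do not write any code yet."),
    ("execute", "Implement step 1 of the agreed plan only: add a 'page' "
                "query parameter defaulting to 1."),
]

def next_prompt(completed: set) -> tuple:
    """Return the first phase not yet completed, enforcing the order."""
    for name, prompt in PHASES:
        if name not in completed:
            return name, prompt
    return None
```

Each prompt is both tightly scoped and explicit about what the agent must not do yet, which mirrors the "research mode first" advice above.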
Terminology
Claude Code: Anthropic's CLI-based AI coding agent that can read, write, and execute code in your local development environment.
Context injection: The practice of strategically placing relevant information (code comments, docs, examples) into the files an LLM will read, to guide its outputs.
Hallucination: When an LLM generates plausible-sounding but incorrect or fabricated information — in coding contexts, this often means wrong function signatures, non-existent APIs, or incorrect logic.