Found 3 instructions in Anthropic's docs that dramatically reduce Claude's hallucinations. Most people don't know they exist.
TL;DR Highlight
Three hallucination-reducing system prompts taken from Anthropic's official docs, installable as a research-mode command for Claude Code.
Who Should Read
Users relying on Claude for research and work where accuracy matters.
Core Mechanics
- Three system prompts found buried in Anthropic's 'Reduce Hallucinations' documentation.
- Significantly reduces confident-but-unsourced answers.
- Installable via GitHub repo (assafkip/research-mode) as a Claude Code command.
How to Apply
- Install from github.com/assafkip/research-mode and apply at the start of research sessions.
- Paste the prompts at the start of each conversation, or add them to your Claude settings so they apply automatically.
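The steps above can be sketched in code. The prompt text below is illustrative, paraphrasing the three techniques Anthropic's "Reduce Hallucinations" guide describes (permission to admit uncertainty, quote-grounded answers, and citations); it is not the exact wording from the research-mode repo, and the model name is an assumption.

```python
# Illustrative sketch: bundle three hallucination-reducing instructions
# into a single system prompt and attach it to a Messages API request.
RESEARCH_MODE_SYSTEM = "\n".join([
    # 1. Give the model explicit permission to admit uncertainty.
    "If you do not know the answer or cannot verify it, say so plainly "
    "instead of guessing.",
    # 2. Ground answers in source material before concluding.
    "When answering from provided documents, first extract relevant direct "
    "quotes, then base your answer only on those quotes.",
    # 3. Require citations so unsupported claims are easy to spot.
    "Support each factual claim with a citation to its source, and flag any "
    "claim you cannot cite as unverified.",
])

def research_request(question: str) -> dict:
    """Build a Messages API payload with the research-mode system prompt."""
    return {
        "model": "claude-sonnet-4-20250514",  # assumed model name
        "max_tokens": 1024,
        "system": RESEARCH_MODE_SYSTEM,       # system prompt goes here
        "messages": [{"role": "user", "content": question}],
    }

# Usage (requires `pip install anthropic` and an API key):
#   import anthropic
#   client = anthropic.Anthropic()
#   reply = client.messages.create(**research_request("What does the paper claim?"))
```

The point of the structure is that the instructions live in the `system` field, so they apply to every turn of the session rather than having to be repeated in each user message.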
Terminology
Hallucination: When an AI model confidently generates false information as if it were fact. Examples: citing non-existent papers, generating incorrect API names.
System Prompt: A special instruction given to LLMs like Claude or GPT before the conversation begins, specifying role, rules, and behavioral guidelines.
Chain-of-Thought: A prompting technique that makes the model reason step by step rather than jumping to a final answer. Effective at improving accuracy.
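Chain-of-thought prompting can be illustrated with a small helper. The wording and tag names below are a generic example of the technique, not taken from Anthropic's docs or the research-mode repo.

```python
# Minimal chain-of-thought illustration: wrap a question with an
# instruction to reason step by step before committing to an answer.
def with_chain_of_thought(question: str) -> str:
    """Return the question rewritten as a chain-of-thought prompt."""
    return (
        f"{question}\n\n"
        "Think through this step by step inside <thinking> tags, "
        "then give your final answer inside <answer> tags."
    )

prompt = with_chain_of_thought("Is 1,000,003 prime?")
```

Asking for the reasoning first, and separating it from the final answer with tags, is what distinguishes this from a plain question; the model is nudged to check its own intermediate steps instead of guessing.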