1M context is now generally available for Opus 4.6 and Sonnet 4.6
TL;DR Highlight
Anthropic rolled out 1M token context windows for Opus 4.6 and Sonnet 4.6 — this changes what's practical for long-context tasks.
Who Should Read
Developers building applications that involve large documents, long conversation histories, or codebases processed as a single context, plus ML engineers benchmarking long-context performance.
Core Mechanics
- Claude Opus 4.6 and Sonnet 4.6 now support 1 million token context windows — enough for entire medium-sized codebases, very long books, or months of conversation history.
- 1M tokens is approximately 750,000 words or roughly 3,000 pages of text.
- This makes certain use cases that previously required RAG or chunking feasible as direct in-context tasks: analyzing a full codebase, processing large legal document sets, or maintaining very long agent memory.
- The key question is whether the model's attention quality degrades in the middle of a 1M token context (the 'lost in the middle' problem) — early reports suggest Anthropic has made improvements here.
- Pricing at this scale becomes a significant consideration: 1M tokens of input is expensive relative to a well-tuned RAG retrieval that only brings in the relevant 10k tokens.
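The pricing tradeoff in the last bullet can be sketched with back-of-envelope arithmetic. A minimal sketch; the per-million-token price below is a placeholder, not Anthropic's published rate:

```python
# Compare resending a full 1M-token context on every query against a RAG
# pipeline that retrieves ~10k relevant tokens per query.
INPUT_PRICE_PER_MTOK = 3.00  # assumed $/1M input tokens (placeholder)

def query_cost(input_tokens: int, queries: int = 1) -> float:
    """Total input cost in dollars for `queries` calls of `input_tokens` each."""
    return input_tokens / 1_000_000 * INPUT_PRICE_PER_MTOK * queries

full_context = query_cost(1_000_000, queries=100)  # resend 1M tokens per query
rag = query_cost(10_000, queries=100)              # retrieve 10k tokens per query
print(f"full context: ${full_context:.2f}, RAG: ${rag:.2f}")
# → full context: $300.00, RAG: $3.00
```

The 100x gap shrinks if prompt caching discounts repeated context, and the RAG side omits embedding, reranking, and infrastructure costs, so rerun the numbers with your provider's current rates before deciding.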
Evidence
- Anthropic announced the 1M context expansion with API availability, confirmed through the API documentation.
- HN commenters ran their own tests — feeding in full codebases and asking questions across the entire codebase. Results were generally positive for code navigation tasks.
- Some found that retrieval quality degrades for context items in the 'middle' of a very long context, consistent with the known 'lost in the middle' problem in long-context models.
- Cost comparisons showed that for high-recall tasks, 1M context could actually be cheaper than complex RAG pipelines with re-retrieval and reranking.
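The 'lost in the middle' degradation can be probed with a simple needle-in-a-haystack harness: plant a unique fact at varying depths of a long filler context and ask the model to recall it. A sketch only; the filler text, needle, and depths are arbitrary, not the tests HN commenters actually ran:

```python
# Build a long context with a "needle" fact inserted at a chosen depth,
# so recall accuracy can be scored per position (start / middle / end).
def build_haystack(filler: str, needle: str, total_chars: int, depth: float) -> str:
    """depth=0.0 places the needle at the start, 1.0 at the end."""
    body = (filler * (total_chars // len(filler) + 1))[:total_chars]
    pos = int(total_chars * depth)
    return body[:pos] + "\n" + needle + "\n" + body[pos:]

for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    ctx = build_haystack("The sky is blue. ",
                         "The secret code is 7342.", 200_000, depth)
    # Send ctx followed by "What is the secret code?" to the model,
    # then record whether the answer contains "7342" at each depth.
```

A dip in recall around depths 0.4-0.6 relative to 0.0 and 1.0 would reproduce the middle-of-context weakness the commenters reported.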
How to Apply
- For codebase analysis and navigation tasks, try loading the entire relevant codebase into a single 1M context request before investing in RAG-based code search.
- For long-document processing (legal, research, financial), test whether direct 1M context gives better answers than chunked RAG — quality may outweigh cost for high-value queries.
- Structure prompts for 1M contexts carefully: put the most important content at the beginning or end, not the middle — attention tends to be strongest there.
- Monitor costs carefully: 1M token inputs at current pricing can be expensive at scale — model the cost/quality tradeoff between a full-context approach and RAG before committing to an architecture.
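As a starting point for the first suggestion, here is a minimal sketch of assembling an entire codebase into a single prompt. The file extensions, the path-header format, and the 4-characters-per-token heuristic are assumptions for illustration:

```python
# Concatenate source files, each prefixed with its path, so the model can
# answer questions that span the whole codebase in one request.
from pathlib import Path

def build_codebase_prompt(root: str, exts=(".py", ".md")) -> str:
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f"=== {path} ===\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text and code.
    return len(text) // 4

# Example: check the assembled prompt fits before sending it.
# prompt = build_codebase_prompt("./my_project")
# assert estimate_tokens(prompt) < 1_000_000
```

To send the assembled prompt, pass it as a user message via the Anthropic SDK's `client.messages.create(...)`; check the current API docs for whether the 1M window requires a beta flag for your model, and fall back to RAG-based code search only if the codebase genuinely exceeds the window.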
Terminology
Context window: The maximum number of tokens an LLM can process in a single call — determining how much text you can give it at once.
Lost in the middle: A known issue where LLMs pay less attention to content in the middle of long contexts compared to the beginning and end.
RAG: Retrieval-Augmented Generation — retrieving only relevant document chunks rather than loading the full corpus, to work around context window limits.
Token: The basic unit of text LLMs process — roughly 0.75 words in English. 1M tokens is approximately 750,000 words.