I reverse-engineered why Claude Code burns through your usage so fast. 7 bugs that stack on top of each other — and the worst one activates when Extra Usage kicks in
TL;DR Highlight
A Max 20x subscriber reverse-engineered the Claude Code CLI source and discovered 7 bugs that drain usage abnormally fast. The core issue is a 'death spiral' where switching to Extra Usage demotes cache TTL from 1 hour to 5 minutes, causing costs to spike 2.8x.
Who Should Read
Developers using Claude Code CLI, especially Max plan or Extra Usage subscribers. Essential reading for anyone who has noticed their usage draining faster than usual.
Core Mechanics
- Most severe bug: an internal function in cli.js detects Extra Usage status and automatically demotes the cache TTL requested from the server from 1 hour to 5 minutes. Taking a break longer than 5 minutes raises the per-turn cost from $0.22 to $0.61 (2.8x) with a 220K-token context.
- This demotion happens client-side only. The server accepts 1-hour TTL requests normally, and patching the function in cli.js to always return true causes the server to grant 1 hour.
- The 'death spiral' structure: other bugs rapidly exhaust plan usage → Extra Usage activates → cache demoted to 5 minutes → one bathroom break triggers a full rebuild → Extra Usage drains instantly → locked out until the 5-hour reset.
- The native installer binary includes a custom Bun runtime that corrupts the cache prefix on every request. Installing via `npm install` resolves this; `file $(which claude)` should return a symlink, not an ELF binary.
- Between v2.1.69 and v2.1.90 (28 days, 20 versions), certain attachment types were missing on session resume, causing a cache miss every time. Fixed in v2.1.91.
- The Autocompact feature had no circuit breaker, causing failed compactions to retry infinitely. Internal source comments recorded 50+ consecutive failures across 1,279 sessions. Fixed in v2.1.89.
- When transcripts grow long, the client fabricates rate limit errors locally without making an actual API call. These responses show `model: synthetic` with 0 tokens. Not yet fixed.
- Server-side compaction silently removes tool results mid-session, breaking the cache. Cannot be patched client-side. Not yet fixed.
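The cost impact of the TTL demotion in the first bullet can be sketched numerically. The per-token rates below are back-derived from the post's $0.22 and $0.61 figures at 220K context tokens; they are illustrative only, not Anthropic's published pricing:

```python
# Illustrative rates, back-derived from the post's per-turn figures
# ($0.22 on a cache hit, $0.61 on a full rebuild, 220K-token context).
CACHE_READ_RATE = 0.22 / 220_000   # $/token when the prompt cache is hit
REBUILD_RATE    = 0.61 / 220_000   # $/token when the cache has expired

def turn_cost(context_tokens: int, idle_minutes: float, ttl_minutes: float) -> float:
    """One turn costs a cheap cache read if the break stayed under the TTL,
    otherwise a full cache rebuild."""
    rate = CACHE_READ_RATE if idle_minutes < ttl_minutes else REBUILD_RATE
    return context_tokens * rate

# A 10-minute break under the normal 1-hour TTL still hits the cache:
print(round(turn_cost(220_000, 10, 60), 2))  # 0.22
# The same break after the Extra Usage demotion to 5 minutes forces a rebuild:
print(round(turn_cost(220_000, 10, 5), 2))   # 0.61 (~2.8x)
```

The model also shows why the "death spiral" compounds: every break over the demoted TTL pays the rebuild rate, which drains Extra Usage faster, which keeps the demotion active.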
Evidence
- A Max 20x user on WSL reported firsthand that their usage had recently been draining noticeably faster; switching from the native install to the npm version brought the drain rate back to normal.
- As a counterpoint, some heavy Max plan users claimed their usage had not changed at all compared to months ago and asked why it wasn't happening to everyone. The OP explained that the worst case, exhausting a weekly quota within 2 hours, requires bugs #1, #3, and #5 to occur simultaneously.
- One commenter had tried the native installer when it first launched, ran into issues, and switched back to the npm version; they had experienced none of the recent issues since.
- A cache analysis tool was linked in the comments: `https://github.com/abhiyan-maitri/claude-usage-report`, which lets you check cache usage per prompt.
- There was significant criticism that the post was written with Claude. Alongside cynical remarks that 'this sub has become a place where bots post for bots,' criticism also emerged that the Claude Code team no longer carefully reviews Claude-generated code, which is why these bugs went unaddressed for 20 releases.
- Some took a charitable view, noting that Anthropic is already subsidizing costs as much as possible, so fixing these bugs would benefit Anthropic as well.
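If you want to spot-check your own sessions for the synthetic rate-limit responses (bug #6) without the linked tool, a rough grep over the local transcripts works. The `~/.claude/projects` path is Claude Code's default session-log location; the exact JSON field layout is an assumption based on the post's description, not verified against the source:

```shell
# Flag session transcripts containing synthetic rate-limit responses
# (bug #6). Claude Code stores per-project JSONL transcripts under
# ~/.claude/projects/ by default; the "model": "synthetic" field is
# taken from the post, so adjust the pattern if your logs differ.
grep -rln '"model": *"synthetic"' ~/.claude/projects/ 2>/dev/null
```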
How to Apply
- If you installed Claude Code via the native installer, reinstall with npm right now. If `file $(which claude)` returns an ELF binary, you have the native version. Switch with `npm install -g @anthropic-ai/claude-code` to resolve the cache prefix corruption bug (#1).
- If you are on a version below v2.1.91, update immediately. The session-resume cache miss bug (#2) and the Autocompact infinite loop bug (#3) were fixed in v2.1.91 and v2.1.89 respectively.
- If you are concerned about cost spikes from Extra Usage, you can patch the cache TTL decision function in cli.js to always return the 1-hour value. Note that updates will overwrite the patch, so you will need to reapply it after each update. Technical details are in GitHub issue anthropics/claude-code#43566.
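Before patching anything, it helps to confirm which version you are running. The sketch below compares versions with `sort -V` and hardcodes the installed version for illustration; it assumes `claude --version` prints the bare version string first:

```shell
# Compare the installed Claude Code version against v2.1.91, the first
# release carrying both the session-resume (#2) and Autocompact (#3) fixes.
installed="2.1.90"   # in practice: installed=$(claude --version | awk '{print $1}')
required="2.1.91"
if [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$installed" ] \
   && [ "$installed" != "$required" ]; then
  echo "update needed: npm update -g @anthropic-ai/claude-code"
else
  echo "version OK"
fi
```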
Code Example
# Check current installation method
file $(which claude)
# Result 'ELF 64-bit...' → native binary (has bugs)
# Result 'symbolic link...' → npm version (normal)
# Switch to npm
npm install -g @anthropic-ai/claude-code
# Check cache settings
grep -A5 'cachedGrowthBookFeatures' ~/.claude.json
Terminology
Related Papers
Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s
A detailed walkthrough of implementing matrix multiplication kernels from scratch in Swift on Apple Silicon, optimizing step by step through CPU, SIMD, AMX, and GPU (Metal) to raise performance from the Gflop/s to the Tflop/s range. A rare resource for developers who want to build the core operations of LLM training from the ground up without frameworks and feel out the performance limits of Apple Silicon.
Removing fsync from our local storage engine
FractalBits shares the design of an SSD-only KV storage engine that drops fsync and achieves roughly 65% higher write performance under identical conditions. To avoid fsync's metadata overhead, the design combines preallocation, O_DIRECT, and a journal aligned to the SSD's atomic write unit.
Google Chrome silently installs a 4 GB AI model on your device without consent
Google Chrome was found to automatically download the 4 GB Gemini Nano model file without user consent, and to re-download it even after deletion. Concerns have been raised about a possible GDPR violation and the environmental cost of applying this across billions of devices.
How OpenAI delivers low-latency voice AI at scale
OpenAI redesigned its WebRTC stack to serve real-time voice AI to over 900 million users, detailing the design decisions and trade-offs of a relay + transceiver split architecture.
Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
Deterministic Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically exploring a tree of possible sequences, simultaneously improving math/code reasoning performance and speed.