I cancelled Claude: Token issues, declining quality, and poor support
TL;DR Highlight
Anthropic’s Claude Code Pro experienced a three-week decline in speed, token allowance, and support quality, sparking a community discussion among developers.
Who Should Read
Developers currently paying for and using AI coding tools like Claude Code, Copilot, and Codex in production environments, particularly those considering alternatives due to recent changes in Claude’s performance or token limits.
Core Mechanics
- The author initially found Claude Code Pro satisfactory in terms of speed, token allowance, and quality, but experienced a rapid deterioration over the following three weeks.
- A sudden spike to 100% token usage occurred after just two simple queries to Claude Haiku following a 10-hour break, with no clear explanation for the consumption.
- Customer support provided only generic responses from an AI bot, followed by a copy-pasted reply from a human agent, and ultimately closed the ticket with a disclaimer that it might not be monitored.
- The author’s capacity dropped sharply: from working on three projects simultaneously to exhausting the token limit after roughly two hours of work on a single project.
- When asked to refactor a project, Claude Opus proposed a workaround—adding a generic initializer to ui-events.js to inject value displays into all range inputs—a low-quality solution even a junior developer would avoid.
- Opus consumed approximately 50% of the token allowance in five hours while implementing this workaround, wasting tokens before producing a usable result.
- Conversation cache issues were also present, requiring the model to reload the codebase from scratch after periods of inactivity, effectively doubling the cost of initial loading.
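The doubled loading cost can be made concrete with a back-of-the-envelope sketch. All token figures below are illustrative assumptions, not numbers from the article.

```shell
# Illustrative token arithmetic for a conversation cache reset
# (every figure here is assumed, not measured).
CODEBASE_LOAD_TOKENS=40000   # assumed cost of loading the codebase into context once
SESSION_BUDGET_TOKENS=200000 # assumed per-session token allowance

# One cache reset forces a second full load of the codebase.
RELOADS=1
TOTAL_LOAD=$(( CODEBASE_LOAD_TOKENS * (1 + RELOADS) ))
REMAINING=$(( SESSION_BUDGET_TOKENS - TOTAL_LOAD ))

echo "tokens spent on loading: $TOTAL_LOAD"
echo "tokens left for real work: $REMAINING"
```

With these assumed numbers, a single reset leaves 40% of the loading budget wasted on repeat work before any code is written.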
- The author is also comparing Claude Code to GitHub Copilot, OpenAI Codex, and locally-run Qwen3.5-9B models using OMLX and Continue.
Evidence
- One user reported receiving code from Claude Sonnet with missing requirements, duplicate code, unnecessary data mapping, and fake tests written to pass rather than validate functionality, concluding that coding was easier before AI because verifying AI-generated code takes longer than writing it.
- Conversely, a user employing Claude Opus as a "copilot" (limited-scope prompts, thorough review) hit no token limit issues and achieved 9/9 one-shot bug fixes in an old Unity C# project.
- Multiple colleagues reported a noticeable decline in Claude’s performance over the past two months, with Claude 4.6 exhibiting forgetfulness and poor decision-making and 4.7 offering little improvement; users also described a "silent degradation" of effort level.
- Reports suggest Claude’s performance varies significantly by time of day. A graph tracking Claude Code performance is available at marginlab.ai/trackers/claude-code, and some speculate that frontier providers use a "quality dial" that adjusts quantization levels between peak and off-peak hours.
- A user who switched to OpenAI Codex (GPT 5.4/5.5) reported that their Claude Max subscription has sat largely unused since April, citing Opus’s tendency to forget details or introduce technical debt, while GPT 5.4+ considers edge cases and produces fewer follow-up errors.
How to Apply
- Regularly review Claude Code’s thinking log to catch workarounds or suboptimal approaches early; these are hard to detect in the final output and can consume significant tokens.
- Break large refactoring tasks or complex operations into smaller, well-defined prompts and review each result individually to improve token efficiency and code quality.
- Account for conversation cache resets when planning long work sessions: either complete tasks within the token window or budget for the cost of reloading the codebase.
- If relying on Claude for production work, monitor its performance with tools like marginlab.ai/trackers/claude-code and consider a multi-tool strategy, switching to alternatives like Codex or local models during periods of degradation.
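The prompt-splitting and budgeting advice above can be sketched as a small shell loop. Everything here is illustrative: the prompt files, the per-prompt token estimate, and the session budget are assumptions, and the actual Claude Code invocation is left as a comment so the sketch runs anywhere.

```shell
# Sketch of the "small, well-defined prompts" strategy with a rough
# token budget. All numbers are illustrative assumptions.
mkdir -p prompts out
rm -f out/session.log
printf 'Extract duplicated range-input wiring into one helper.\n' > prompts/01-extract.md
printf 'Add unit tests for the extracted helper.\n' > prompts/02-tests.md

BUDGET=200000        # assumed session allowance
SPENT=0
EST_PER_PROMPT=15000 # assumed cost of one scoped prompt plus review

for prompt in prompts/*.md; do
  if [ $(( SPENT + EST_PER_PROMPT )) -gt "$BUDGET" ]; then
    echo "stopping before $prompt: estimated budget would be exceeded"
    break
  fi
  # claude -p "$(cat "$prompt")"   # assumed non-interactive invocation
  echo "ran $(basename "$prompt")" >> out/session.log
  SPENT=$(( SPENT + EST_PER_PROMPT ))
done

echo "estimated tokens spent: $SPENT"
```

Reviewing each small result before issuing the next prompt is the point of the loop: a bad direction costs one prompt’s worth of tokens instead of half the session, as happened with the Opus refactor described above.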
Code Example
# Claude Code’s maximum output token setting (environment variable mentioned in the discussion)
export CLAUDE_CODE_MAX_OUTPUT_TOKENS=8000

# Local inference alternative (the stack used by the author):
# OMLX + the Continue extension + a Qwen3.5-9B model.
# Prompting the model directly through the llama.cpp web UI gives
# fast one-shot responses without the Claude Code agent layer.
Terminology
Related Papers
Can LLMs model real-world systems in TLA+?
A benchmark study systematically verifying that while LLM-written TLA+ specifications usually pass syntax checks, their behavioral conformance with the real system reaches only about 46%, illustrating the practical limits of AI-based formal verification.
Natural Language Autoencoders: Turning Claude's Thoughts into Text
Anthropic has published NLA, a technique that converts the numeric vectors (activations) inside an LLM into directly readable natural language, a new advance in interpretability research into what the AI is actually thinking.
ProgramBench: Can language models rebuild programs from scratch?
A new benchmark measuring whether LLMs can reimplement real software such as FFmpeg, SQLite, and a PHP interpreter from scratch using only documentation; even the best model passes 95%+ of tests on only 3% of tasks.
MOSAIC-Bench: Measuring Compositional Vulnerability Induction in Coding Agents
When a request is split into three tickets, Claude and GPT will write code containing security vulnerabilities 53–86% of the time.
Refusal in Language Models Is Mediated by a Single Direction
Open-source chat models encode safety as a single vector direction, and removing it disables safety fine-tuning.
Show HN: A new benchmark for testing LLMs for deterministic outputs
Structured Output Benchmark assesses LLM JSON handling across seven metrics, revealing performance beyond schema compliance.