Show HN: CodeBurn – Analyze Claude Code token usage by task
TL;DR Highlight
An open-source terminal dashboard that visualizes where, and how many, tokens are consumed by AI coding tools. It works by reading each tool's local session files only, with no separate API keys or proxies required.
Who Should Read
Developers who use AI coding tools such as Claude Code, Cursor, and Codex daily and want to understand their costs and identify which tasks consume the most tokens.
Core Mechanics
- CodeBurn shows token usage for major AI coding tools like Claude Code, OpenAI Codex, Cursor, OpenCode, Pi, and GitHub Copilot, categorized by task type, tool, model, MCP server, and project.
- Its approach is distinctive: it requires no wrappers, proxies, or API keys, and instead directly parses the session files each tool already stores on disk (e.g. ~/.claude/projects/ for Claude Code, ~/.codex/sessions/ for Codex).
- It tracks the 'one-shot success rate' for each task type, allowing you to see which tasks are completed on the first attempt and which ones waste tokens with edit/test/fix retries.
- It's an interactive TUI (Terminal UI) dashboard that runs in the terminal, built on Ink (a terminal React framework), and supports gradient charts, responsive panels, and keyboard navigation.
- It supports various time ranges such as today, 7 days, 30 days, monthly, and all time, and also features CSV/JSON export, a macOS SwiftBar menu bar widget, and auto-refresh functionality.
- Price information is automatically cached from LiteLLM, allowing you to calculate costs for all supported models without separate configuration.
- Installation is as simple as `npm install -g codeburn`, and with Node.js 20+ you can run it directly via `npx codeburn`. For Cursor/OpenCode, better-sqlite3 is installed automatically so their SQLite session stores can be read.
- The creator revealed that they were spending about $1,400 per week on Claude Code and wanted to see where the tokens were being consumed.
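The no-proxy approach described above can be sketched in a few lines: each tool writes session logs to known paths on disk, so an analyzer only needs filesystem access. A minimal Python sketch, assuming a simplified JSONL layout where each line may carry a `message.usage` block with token counts (the field names here are illustrative, not the exact Claude Code schema):

```python
import json
from pathlib import Path

def sum_session_tokens(session_dir: str) -> dict:
    """Sum token usage across all JSONL session files under session_dir.

    Assumes each line is a JSON object that may contain a
    message.usage block with input/output token counts
    (an illustrative schema, not the exact on-disk format).
    """
    totals = {"input_tokens": 0, "output_tokens": 0}
    for path in Path(session_dir).rglob("*.jsonl"):
        for line in path.read_text().splitlines():
            if not line.strip():
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or corrupt lines
            usage = record.get("message", {}).get("usage", {})
            totals["input_tokens"] += usage.get("input_tokens", 0)
            totals["output_tokens"] += usage.get("output_tokens", 0)
    return totals
```

Because the session files are append-only logs, rereading them on a timer is enough to power the auto-refresh feature without hooking into the tools themselves.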
Evidence
- In response to the creator's $1,400/week Claude Code spend, one commenter said a $200/month flat-rate plan was enough to run 5 agents simultaneously on a 300k-LoC codebase without ever hitting the rate limit, arguing that flat-rate plans remove the cost anxiety of pay-as-you-go.
- Claudoscope (github.com/cordwainersmith/Claudoscope) and ClaudeRank (clauderank.com) were mentioned in the comments as tools with similar purposes, and commenters expressed a preference for CodeBurn's approach.
- A compatibility issue with Cursor Agent was reported, where the tool fails to recognize data if Cursor stores it in the ~/.cursor path.
- An interesting fact was shared in the comments about the terminal UI being built with Ink (React for terminals), noting that 'Claude Code itself is also made with Ink'.
- A comment suggested adding a feature to detect cost inefficiencies and propose improvements, and the creator responded positively.
How to Apply
- If you use Claude Code or Cursor daily and your bill at the end of the month is higher than expected, you can immediately run `npx codeburn` to see which projects and task types are consuming the most tokens.
- By identifying task types with low one-shot success rates, you can improve your prompts or task decomposition to reduce the token waste of edit/test/fix retries.
- If you're deploying AI coding tools across a team and need to justify costs, you can extract data with `codeburn report --format json` to create team- and project-based cost reports.
- If you're on macOS and want to continuously monitor token usage, you can connect it to a SwiftBar menu bar widget to view the status without opening a separate dashboard.
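For the team cost-reporting case above, the JSON export can be post-processed with a short script. A sketch of aggregating cost per project, assuming a hypothetical export schema with an `entries` array carrying `project` and `cost_usd` fields (check the actual `codeburn report --format json` output before relying on these names):

```python
import json
from collections import defaultdict

def cost_by_project(report_json: str) -> dict:
    """Aggregate USD cost per project from a CodeBurn JSON report.

    The field names ("entries", "project", "cost_usd") are
    hypothetical -- verify them against the real export schema.
    """
    report = json.loads(report_json)
    totals = defaultdict(float)
    for entry in report.get("entries", []):
        totals[entry.get("project", "unknown")] += entry.get("cost_usd", 0.0)
    return dict(totals)
```

Piping `codeburn report --format json` into a script like this is enough for a recurring per-team cost summary without any dashboard access.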
Code Example
# Installation
npm install -g codeburn
# Run directly without installation
npx codeburn
# Basic interactive dashboard (last 7 days)
codeburn
# Today's usage
codeburn today
# This month's usage
codeburn month
# Recent 30-day rolling window
codeburn report -p 30days
# All time
codeburn report -p all
# Output in JSON format
codeburn report --format json
# Auto-refresh every 60 seconds
codeburn report --refresh 60
# One-line summary (today + this month)
codeburn status
# Export to CSV (today/7 days/30 days)
codeburn export
# Export to JSON
codeburn export -f json
Related Papers
Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s
A detailed walkthrough of implementing matrix multiplication kernels in Swift on Apple Silicon, optimizing step by step across CPU, SIMD, AMX, and GPU (Metal) to raise performance from Gflop/s to Tflop/s. A rare resource for developers who want to build the core operations of LLM training from scratch, without frameworks, and get a feel for Apple Silicon's performance limits.
Removing fsync from our local storage engine
FractalBits shares the design of an SSD-only KV storage engine built without fsync, achieving roughly 65% higher write performance under identical conditions. The key is a structure that avoids fsync's metadata overhead by combining preallocation, O_DIRECT, and a journal aligned to the SSD's atomic write unit.
Google Chrome silently installs a 4 GB AI model on your device without consent
Google Chrome was found to automatically download the 4 GB Gemini Nano model file without user consent, re-downloading it even after deletion. Concerns have been raised about potential GDPR violations and the environmental cost of rolling this out to billions of devices.
How OpenAI delivers low-latency voice AI at scale
OpenAI redesigned its WebRTC stack to serve real-time voice AI to over 900 million users, detailing the design decisions and trade-offs of a relay + transceiver split architecture.
Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
Deterministic Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically exploring a tree of possible sequences, simultaneously improving math/code reasoning performance and speed.