How Do AI Agents Spend Your Money? Analyzing and Predicting Token Consumption in Agentic Coding Tasks
TL;DR Highlight
AI coding agents consume over 1200x more tokens than standard chat, yet performance doesn’t improve with increased usage.
Who Should Read
Developers and engineering managers deploying AI coding agents (Cursor, Devin, OpenHands, etc.) or managing their costs. Those surprised by unexpectedly high agent API bills will find this particularly relevant.
Core Mechanics
- Agentic coding—autonomous code modification—consumes 3500x more tokens than simple code reasoning and 1200x more than code chat. Input tokens, not output, are the primary cost driver.
- Token usage can vary by up to 30x across runs of the same task, making costs highly unpredictable.
- Higher token consumption doesn’t guarantee better success rates. Accuracy peaks in the mid-cost range and plateaus or declines at the highest costs.
- Analysis of expensive failures reveals a pattern of repeatedly viewing and modifying the same files—inefficient exploration that drives up costs without making progress.
- Token efficiency varies significantly between models. Kimi-K2 and Claude Sonnet-4.5 consume 1.5 million+ more tokens on average than GPT-5, a difference stemming from inherent model behavior, not task difficulty.
- Expert-assessed task difficulty (e.g., 'under 15 minutes') is a weak predictor of actual token consumption (Kendall τb = 0.32; a minimal sketch of this rank-correlation check follows this list). Even 'easy' tasks can be expensive for agents.
- When models predict their own token usage, correlations with actual usage max out at 0.39, and the models consistently underestimate real costs, particularly input tokens.
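As a rough illustration of the rank-correlation check behind the τb = 0.32 figure, the sketch below compares expert difficulty labels against observed token totals with SciPy; the difficulty buckets, token counts, and ordinal encoding are made-up examples, not data from the paper.
# Hedged sketch: rank-correlate expert difficulty labels with observed token usage.
# Difficulty buckets, token totals, and the ordinal encoding are illustrative assumptions.
from scipy.stats import kendalltau
difficulty_rank = {"<15 min": 0, "15 min - 1 hour": 1, "1-4 hours": 2, ">4 hours": 3}
tasks = [
    ("<15 min", 180_000),
    ("<15 min", 2_400_000),  # an 'easy' task can still be expensive for an agent
    ("15 min - 1 hour", 950_000),
    ("1-4 hours", 3_100_000),
    (">4 hours", 4_800_000),
]
ranks = [difficulty_rank[label] for label, _ in tasks]
tokens = [total for _, total in tasks]
# Kendall tau-b handles ties in the ordinal difficulty labels.
tau_b, p_value = kendalltau(ranks, tokens, variant="b")
print(f"Kendall tau-b = {tau_b:.2f} (p = {p_value:.3f})")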
Evidence
- "Agentic coding averages 4,167,850 tokens / $1.857. Code Reasoning uses 1,190 tokens / $0.016, and Code Chat uses 3,390 tokens / $0.023—3500x and 1200x differences, respectively. The most expensive run of a problem costs 2x more than the cheapest, and up to 30x more. The most expensive problems consume approximately 7 million more tokens than the cheapest. Pearson correlation for model self-prediction peaks at 0.39 (Claude Sonnet-4.5 output tokens), with all models systematically underestimating actual usage. GPT-5 and GPT-5.2 used 1.5 million+ fewer tokens than Kimi-K2 on common successful tasks, and this efficiency gap persisted on common failures."
How to Apply
- When agent costs threaten to exceed budget, first run the agent in 'token estimation mode' to obtain a rough cost signal before actual execution. Use the prompt from the paper’s Appendix C to decompose the task and receive estimates in JSON format.
- If multiple models are available for the same coding task, choosing GPT-5 or GPT-5.2 over Kimi-K2 or Claude Sonnet-4.5 can save an average of 1.5 million+ tokens per task, a difference that adds up quickly in high-throughput pipelines.
- Repeated viewing and modifying of the same file in a failing agent run is a signal of cost explosion. Add a budget-aware policy that monitors how often the same file is accessed and terminates execution when a threshold is exceeded; a minimal sketch follows this list.
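A minimal sketch of such a budget-aware guard is shown below; the tool-call event shape, tool names, and the access threshold are assumptions for illustration, since the concrete hook depends on the agent framework in use.
# Hedged sketch: stop an agent run when the same file is viewed/modified too many times.
# The event dict shape, the tool names, and the threshold are illustrative assumptions;
# wire record() into whatever tool-call callback your agent framework exposes.
from collections import Counter
class RepeatedFileAccessGuard:
    def __init__(self, max_accesses_per_file: int = 10):
        self.max_accesses = max_accesses_per_file
        self.access_counts = Counter()
    def record(self, event: dict) -> None:
        """Call from the agent's tool-call hook for file view/edit actions."""
        if event.get("tool") in {"view_file", "edit_file"} and event.get("path"):
            path = event["path"]
            self.access_counts[path] += 1
            if self.access_counts[path] > self.max_accesses:
                raise RuntimeError(
                    f"Budget guard: {path} accessed {self.access_counts[path]} times; "
                    "likely unproductive looping, terminating this run."
                )
# Example: feed tool-call events from a run into the guard.
guard = RepeatedFileAccessGuard(max_accesses_per_file=10)
for event in [{"tool": "view_file", "path": "src/core.py"},
              {"tool": "edit_file", "path": "src/core.py"}]:
    guard.record(event)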
Code Example
# Based on paper Appendix C - Prompt for estimating token cost before execution
# Core part of the system prompt for using the OpenHands agent in token estimation mode
SYSTEM_PROMPT = """
You are a TOKEN ESTIMATION agent, not a problem-solving agent.
Your ONLY goal is to estimate token costs, NOT to fix bugs.
Estimation Workflow:
1. Exploration: Explore relevant files and understand the task context.
2. Analysis: Consider solution approaches and estimate token costs per phase.
3. Testing: Run existing tests to understand complexity (do NOT modify files).
4. Output: Produce a JSON estimate and call finish.
Required JSON output format:
{
  "predicted_input_tokens": <integer>,
  "predicted_output_tokens": <integer>,
  "predicted_total_tokens": <integer>,
  "confidence": <float 0-1>,
  "breakdown_by_phase": {
    "repo_cloning": {"input_tokens": ..., "output_tokens": ...},
    "initial_reading": {"input_tokens": ..., "output_tokens": ...},
    "test_setup": {"input_tokens": ..., "output_tokens": ...},
    "debugging": {"input_tokens": ..., "output_tokens": ...},
    "coding_iterations": {"input_tokens": ..., "output_tokens": ...},
    "verification": {"input_tokens": ..., "output_tokens": ...},
    "review_cleanup": {"input_tokens": ..., "output_tokens": ...}
  }
}
CRITICAL: Never write actual code fixes. Never modify source files.
"""
USER_PROMPT = """
I've uploaded a python repository at /workspace/{repo_name}.
You are a TOKEN ESTIMATION agent.
Estimate the token cost to fix the following issue:
<issue description>
{issue_text}
</issue description>
"""
# Example of actual application (using OpenAI API)
import openai
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # or another model
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": USER_PROMPT.format(
            repo_name="my-project",
            issue_text="Fix a bug where X fails when Y happens"
        )}
    ]
)
print(response.choices[0].message.content)
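Continuing from the response object above, a hedged sketch of turning the estimate into a go/no-go decision: the JSON parsing assumes the model followed the required output format, and the per-token prices, budget, and padding factor are placeholder numbers rather than figures from the paper.
# Hedged sketch: parse the estimation-mode JSON and gate the real agent run on a budget.
# Prices, budget, and the 2x padding factor are placeholders; substitute your provider's rates.
import json
estimate = json.loads(response.choices[0].message.content)  # may need to strip surrounding text in practice
PRICE_INPUT_PER_M = 2.50    # assumed USD per 1M input tokens
PRICE_OUTPUT_PER_M = 10.00  # assumed USD per 1M output tokens
BUDGET_USD = 1.00
predicted_cost = (
    estimate["predicted_input_tokens"] / 1e6 * PRICE_INPUT_PER_M
    + estimate["predicted_output_tokens"] / 1e6 * PRICE_OUTPUT_PER_M
)
# Models systematically underestimate their own usage, so pad the estimate before gating.
padded_cost = predicted_cost * 2.0
print(f"Predicted ~${predicted_cost:.2f}, padded ~${padded_cost:.2f} (confidence {estimate['confidence']})")
if padded_cost > BUDGET_USD:
    print("Estimate exceeds budget: skip, or route to a more token-efficient model.")
else:
    print("Within budget: proceed with the real agent run.")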
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to seven parallel sub-agents each review a PR from a different perspective and can apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community has raised skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that has Claude Code parse IP packets directly and construct ICMP echo replies so that it actually answers pings, an entertaining case of pushing the 'Markdown is the code and the LLM is the processor' idea all the way down to the network-stack level.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (such as Claude Code) and even supports blame, showing which prompt wrote which line of code.
Principles for agent-native CLIs
An article that lays out principles for designing CLI tools that AI agents can use well; as agents rely on CLIs more and more often, this design approach is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that orchestrates multiple AI agents so they can divide roles and collaborate, letting you assemble a multi-agent pipeline quickly with no configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that provides an isolated sandbox environment in which an AI agent can touch real production data and still be rolled back, unifying GitHub/S3/Google Drive into a single versioned filesystem.
Original Abstract
The wide adoption of AI agents in complex human workflows is driving rapid growth in LLM token consumption. When agents are deployed on tasks that require a significant amount of tokens, three questions naturally arise: (1) Where do AI agents spend the tokens? (2) Which models are more token-efficient? and (3) Can agents predict their token usage before task execution? In this paper, we present the first systematic study of token consumption patterns in agentic coding tasks. We analyze trajectories from eight frontier LLMs on SWE-bench Verified and evaluate models' ability to predict their own token costs before task execution. We find that: (1) agentic tasks are uniquely expensive, consuming 1000x more tokens than code reasoning and code chat, with input tokens rather than output tokens driving the overall cost; (2) token usage is highly variable and inherently stochastic: runs on the same task can differ by up to 30x in total tokens, and higher token usage does not translate into higher accuracy; instead, accuracy often peaks at intermediate cost and saturates at higher costs; (3) models vary substantially in token efficiency: on the same tasks, Kimi-K2 and Claude-Sonnet-4.5, on average, consume over 1.5 million more tokens than GPT-5; (4) task difficulty rated by human experts only weakly aligns with actual token costs, revealing a fundamental gap between human-perceived complexity and the computational effort agents actually expend; and (5) frontier models fail to accurately predict their own token usage (with weak-to-moderate correlations, up to 0.39) and systematically underestimate real token costs. Our study offers new insights into the economics of AI agents and can inspire future research in this direction.