Early Stopping for Large Reasoning Models via Confidence Dynamics
TL;DR Highlight
A method that saves 25-50% of tokens by tracking how the model's confidence changes during inference and stopping unnecessary reasoning early.
Who Should Read
ML engineers who are deploying reasoning models like DeepSeek-R1 and Qwen3 into production and are struggling with high inference costs. Developers who want to reduce the inference cost of long chain-of-thought generation.
Core Mechanics
- Correct reasoning trajectories show a rapid increase in confidence early in generation and stabilize quickly, while incorrect trajectories exhibit unstable and fluctuating confidence patterns.
- Incorrect reasoning trajectories are more than twice as long as correct ones (roughly 25K vs 12K tokens) and account for a significant portion of the total computation.
- CoDE-Stop combines two signals: (1) a ramping confidence threshold to stop when confidence is sufficiently high, and (2) a degeneration score to detect unstable reasoning.
- The degeneration score gives higher weights (log weighting) to early reasoning steps, as early confidence patterns are more useful for distinguishing between correct and incorrect answers.
- Even as inference continues, the confidence of incorrect trajectories tends to increase, so judging by late-stage confidence alone is misleading; early signals are therefore more informative.
- It requires no additional training and can be directly attached to existing models at inference time. It also has additive effects when used with prompting techniques like Chain-of-Draft.
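The degeneration score described above can be sketched as follows. This is an illustrative assumption, not the paper's exact formula: it penalizes drops between consecutive intermediate-answer confidences and applies a logarithmic weight so that early instability counts more than late instability.

```python
import math

def degeneration_score(confidences):
    """Instability measure over intermediate-answer confidences.

    Illustrative sketch only: the paper's exact formula is not reproduced
    here. Confidence drops are penalized, with a 1/log(t+1) weight so
    that earlier reasoning steps contribute more, as the summary notes
    early confidence patterns best separate correct from incorrect runs.
    """
    score = 0.0
    for t in range(1, len(confidences)):
        drop = max(0.0, confidences[t - 1] - confidences[t])  # only penalize drops
        score += drop / math.log(t + 1)  # earlier steps -> larger weight
    return score
```

A stable, monotonically rising trajectory scores 0, while the same confidence dip scores higher when it happens early rather than late, matching the intuition that early fluctuation signals an incorrect rollout.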
Evidence
- Qwen3-4B achieved a 25% reduction in token usage while maintaining accuracy across 4 benchmarks (8344 → 5956 tokens).
- Qwen3-14B maintained 93.0% accuracy on MATH500 while reducing tokens by 49.1% (4878 → 2529 tokens).
- CoDE-Stop achieved a better accuracy-compute tradeoff than DEER (the most similar baseline) across all models and benchmarks. For example, on AIME, Qwen3-4B reached similar accuracy with 12800 tokens (CoDE-Stop) vs 13400 tokens (DEER).
- On GSM8K, Qwen3-4B achieved the highest compression rate, 52.4% (Vanilla 2306 → CoDE-Stop 1233 tokens), while accuracy held at 94.8% → 94.6%.
How to Apply
- If you are using reasoning models such as Qwen3, DeepSeek-R1, or Nemotron from HuggingFace, you can wrap CoDE-Stop around the inference loop by importing the code from GitHub. Start with max_new_tokens set to 32K, δ=0.55, and rmax=0.95, with no additional training.
- During inference, measure confidence by generating an intermediate answer each time a reasoning-step delimiter token such as "Wait" or "\n\n" appears, and force the final answer once the degeneration score exceeds the threshold τ.
- To adjust the cost vs. accuracy tradeoff, change only τ (the degeneration threshold). The paper confirms a smooth tradeoff curve: lowering τ stops earlier and saves tokens, while raising it allows longer inference and recovers accuracy.
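The delimiter-based probing described above can be sketched as a segmentation step. The delimiter set below follows the summary ("Wait" and blank lines); in practice you would probe an intermediate answer's confidence after each segment, a hook that is omitted here since it depends on the model API.

```python
import re

# Reasoning-step delimiters taken from the summary: the token "Wait"
# or a blank line marks the end of a step. This set is an assumption;
# a real deployment would match the model's actual delimiter tokens.
STEP_DELIMITER = re.compile(r"(?:\n\n|\bWait\b)")

def reasoning_steps(cot_text):
    """Split generated chain-of-thought text into reasoning steps.

    After each returned step, an intermediate answer would be forced
    and its confidence recorded (model-specific, not shown here).
    """
    parts = [p.strip() for p in STEP_DELIMITER.split(cot_text)]
    return [p for p in parts if p]
```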
Code Example
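A minimal sketch of the full early-stopping loop, combining the two signals from Core Mechanics. The ramp schedule, the degeneration formula, and the default ramp_steps are illustrative assumptions; δ=0.55, rmax=0.95, and τ follow the parameters named in How to Apply. A real integration would feed in confidences measured at each reasoning-step delimiter during decoding; here the sequence is passed in directly so the logic is self-contained.

```python
import math

def ramping_threshold(step, delta=0.55, r_max=0.95, ramp_steps=10):
    """Confidence threshold that ramps linearly from delta up to r_max.

    The linear ramp over `ramp_steps` is an illustrative choice, not
    the paper's exact schedule.
    """
    frac = min(1.0, step / ramp_steps)
    return delta + (r_max - delta) * frac

def degeneration_score(confidences):
    """Log-weighted instability: early confidence drops count more."""
    score = 0.0
    for t in range(1, len(confidences)):
        drop = max(0.0, confidences[t - 1] - confidences[t])
        score += drop / math.log(t + 1)
    return score

def code_stop_loop(step_confidences, tau=1.0, **thr_kwargs):
    """Walk intermediate-answer confidences and decide when to stop.

    `step_confidences` stands in for confidences probed at each
    reasoning-step delimiter (e.g. "Wait" or a blank line). Returns
    (stop_step, reason): stop when confidence clears the ramping
    threshold, or force the final answer when the degeneration score
    exceeds tau.
    """
    history = []
    for step, conf in enumerate(step_confidences):
        history.append(conf)
        if conf >= ramping_threshold(step, **thr_kwargs):
            return step, "confident"    # answer has stabilized early
        if degeneration_score(history) > tau:
            return step, "degenerate"   # unstable run: force final answer
    return len(step_confidences) - 1, "exhausted"
```

Example behavior: a run whose confidence climbs quickly stops with reason "confident" after a couple of steps, a fluctuating run trips the degeneration check, and a low-confidence run that never triggers either signal falls through to "exhausted" at the token budget.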
Terminology
Related Papers
Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s
A detailed walkthrough of implementing matrix-multiplication kernels by hand in Swift on Apple Silicon, optimizing step by step across CPU, SIMD, AMX, and GPU (Metal) to take performance from Gflop/s to Tflop/s. A rare resource for developers who want to implement the core operations of LLM training from scratch, without frameworks, and get a feel for Apple Silicon's performance limits.
Removing fsync from our local storage engine
FractalBits shares the design of an SSD-only KV storage engine that drops fsync, achieving roughly 65% higher write performance under otherwise identical conditions. The key is a structure that avoids fsync's metadata overhead by combining preallocation, O_DIRECT, and a journal aligned to the SSD's atomic write unit.
Google Chrome silently installs a 4 GB AI model on your device without consent
Google Chrome was found to automatically download the 4 GB Gemini Nano model file without user consent, and to re-download it even after deletion. Concerns have been raised about possible GDPR violations and the environmental cost of rolling this out across billions of devices.
How OpenAI delivers low-latency voice AI at scale
OpenAI redesigned its WebRTC stack to serve real-time voice AI to over 900 million users, detailing the design decisions and trade-offs of a relay + transceiver split architecture.
Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
Deterministic Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically exploring a tree of possible sequences, simultaneously improving math/code reasoning performance and speed.
Related Resources
Original Abstract
Large reasoning models rely on long chain-of-thought generation to solve complex problems, but extended reasoning often incurs substantial computational cost and can even degrade performance due to overthinking. A key challenge is determining when the model should stop reasoning and produce the final answer. In this work, we study the confidence of intermediate answers during reasoning and observe two characteristic behaviors: correct reasoning trajectories often reach high-confidence answers early, while incorrect rollouts tend to produce long, unproductive reasoning traces and exhibit less reliable confidence dynamics. Motivated by these observations, we propose CoDE-Stop (Confidence Dynamics Early Stop), an early stopping method that leverages the dynamics of intermediate answer confidence to decide when to terminate reasoning, requiring no additional training and easily integrating into existing models. We evaluate CoDE-Stop on diverse reasoning and science benchmarks across multiple models. Compared to prior early stopping methods, it achieves a more favorable accuracy-compute tradeoff and reduces total token usage by 25-50% compared to standard full-length reasoning. In addition, we provide analyses of confidence dynamics during reasoning, offering insights into how confidence changes in both correct and incorrect trajectories.