Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
TL;DR Highlight
Distinct Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically enumerating distinct leaves of a truncated decoding tree, improving both accuracy and speed on math and code reasoning.
Who Should Read
ML engineers seeking to reduce LLM inference costs while maintaining accuracy, particularly developers currently using self-consistency (multiple answer generation followed by majority voting) for tasks like solving math problems or generating code.
Core Mechanics
- Self-consistency suffers from significant waste due to repeatedly generating the same prefix. With code generation, approximately 17% of all prefix tokens are duplicated when k=32.
- DLE (Distinct Leaf Enumeration) is a deterministic method that views the space of possible sequences as a tree and explores only new, distinct leaves without redundancy.
- DLE functions as a drop-in replacement for existing truncation methods like top-p, min-p, and ε-sampling, operating without requiring additional model calls or separate evaluations.
- For a given number of sequences, DLE achieves higher probability mass coverage than stochastic self-consistency, and a positive correlation exists between coverage and accuracy.
- DLE is compatible with prefix caching in inference engines like SGLang/vLLM, allowing shared-prefix KV cache reuse and achieving near-theoretical-maximum cache hit rates.
- Early stopping prevents waste by terminating branches when 10 consecutive tokens match a previous sibling branch; over 85% of terminated branches would have yielded the same final answer.
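The early-stopping rule above can be sketched as a simple check: terminate a branch once its last n generated tokens appear as a contiguous run in a previously completed sibling. This is an illustrative sketch (the function name and token-list representation are assumptions, not from the paper):

```python
def should_early_stop(generated, siblings, n=10):
    """Return True if the last n tokens of `generated` occur as a
    contiguous run in any previously completed sibling sequence,
    i.e. the branch has converged onto an already-explored path."""
    if len(generated) < n:
        return False
    tail = tuple(generated[-n:])
    for sib in siblings:
        # Slide a window of length n over the sibling's token sequence.
        for i in range(len(sib) - n + 1):
            if tuple(sib[i:i + n]) == tail:
                return True
    return False
```

In a real decoder this check would run incrementally during generation rather than over whole sequences, but the matching criterion is the same.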
Evidence
- On Qwen2.5-0.5B-Instruct, DLE (with ε-sampling) achieves 44.05% on GSM8K maj@8 versus 40.94% for self-consistency (with ε-sampling), a 3.1-percentage-point improvement. On HumanEval pass@8, DLE achieves 52.44% versus 47.56% for self-consistency, a 4.9-point improvement.
How to Apply
- If you currently implement self-consistency with vLLM or SGLang, DLE can be integrated by replacing only the sampling loop while keeping the same truncation sampler settings (top-p, min-p, ε-sampling). Combining it with SGLang’s RadixAttention or vLLM’s automatic prefix caching can further reduce latency.
Code Example
# DLE core logic pseudocode (based on ε-sampling truncation).
# greedy_generate_with_branches, get_shared_prefix_cache, and
# extract_answer stand in for engine-specific calls.
import heapq
from collections import Counter

def dle_decode(model, prompt, k, epsilon=0.05):
    leaves = []
    # Priority queue ordered by path probability (PROBFIRST).
    # Entries: (negative path prob, tie-breaker, prefix tokens, alternative token).
    # The tie-breaker keeps heapq from comparing token lists on equal scores.
    branch_queue = []
    tiebreak = 0

    # Step 1: greedy generation creates the first leaf and records every
    # position where an alternative token survived the ε-threshold.
    prefix, branch_points = greedy_generate_with_branches(model, prompt, epsilon)
    leaves.append(prefix)
    for pos, token, prob in branch_points:
        tiebreak += 1
        heapq.heappush(branch_queue, (-prob, tiebreak, prefix[:pos], token))

    # Step 2: repeatedly expand the most probable unexplored branch.
    while len(leaves) < k and branch_queue:
        neg_prob, _, branch_prefix, alt_token = heapq.heappop(branch_queue)
        # Start a new prefix with the alternative token, then complete greedily.
        new_prefix = branch_prefix + [alt_token]
        # Early stopping: skip if the continuation overlaps a previous
        # sibling branch by 10+ consecutive tokens.
        if check_early_stop(new_prefix, leaves, n=10):
            continue
        new_leaf, new_branches = greedy_generate_with_branches(
            model, new_prefix, epsilon,
            kv_cache=get_shared_prefix_cache(new_prefix),  # reuse shared-prefix KV cache
        )
        leaves.append(new_leaf)
        for pos, token, prob in new_branches:
            path_prob = -neg_prob * prob  # accumulated path probability
            tiebreak += 1
            heapq.heappush(branch_queue, (-path_prob, tiebreak, new_leaf[:pos], token))

    # Step 3: uniform-weight majority vote over the distinct leaves.
    answers = [extract_answer(leaf) for leaf in leaves]
    return Counter(answers).most_common(1)[0][0]
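The tree-traversal idea can also be illustrated without a model. The following is a minimal, self-contained sketch (all names here are illustrative, not from the paper): best-first enumeration of the k most probable leaves of an explicit probability tree, where `next_probs` is a toy stand-in for the model's next-token distribution.

```python
import heapq

def enumerate_top_leaves(next_probs, k, max_len, epsilon=0.05):
    """Best-first enumeration of the k highest-probability leaves of a
    decoding tree. `next_probs(prefix)` returns {token: prob}; tokens
    below `epsilon` are pruned (ε-sampling truncation). A leaf is a
    prefix ending in "<eos>" or reaching max_len. Each leaf is visited
    exactly once: no sampling, no duplicates."""
    tiebreak = 0  # keeps heapq from comparing prefixes on equal scores
    heap = [(-1.0, tiebreak, ())]  # (negative path prob, tie-breaker, prefix)
    leaves = []
    while heap and len(leaves) < k:
        neg_p, _, prefix = heapq.heappop(heap)
        if prefix and (prefix[-1] == "<eos>" or len(prefix) >= max_len):
            leaves.append((prefix, -neg_p))  # most probable leaves come out first
            continue
        for tok, p in next_probs(prefix).items():
            if p < epsilon:
                continue  # truncate low-probability branches
            tiebreak += 1
            heapq.heappush(heap, (neg_p * p, tiebreak, prefix + (tok,)))
    return leaves

# Toy "model": two biased-coin steps, then end-of-sequence.
def toy(prefix):
    if len(prefix) < 2:
        return {"a": 0.7, "b": 0.3}
    return {"<eos>": 1.0}
```

Calling `enumerate_top_leaves(toy, k=3, max_len=5)` returns three distinct leaves in descending path-probability order, whereas sampling with replacement would repeatedly draw the dominant "aa" sequence.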
# Example usage (SGLang-based). RadixAttention automatically shares the
# prefix KV cache, so each fork generates only its branch-specific suffix.
import sglang as sgl

@sgl.function
def dle_with_sglang(s, question, leaves_prefixes):
    s += sgl.system("You are a math solver.")
    s += sgl.user(question)
    # Cache hit up to the shared prefix; new generation only after that.
    forks = s.fork(len(leaves_prefixes))
    for branch, branch_prefix in zip(forks, leaves_prefixes):
        branch += sgl.assistant(branch_prefix + sgl.gen("completion", max_tokens=512))
Original Abstract
Self-consistency boosts inference-time performance by sampling multiple reasoning traces in parallel and voting. However, in constrained domains like math and code, this strategy is compute-inefficient because it samples with replacement, repeatedly revisiting the same high-probability prefixes and duplicate completions. We propose Distinct Leaf Enumeration (DLE), a deterministic decoding method that treats truncated sampling as traversal of a pruned decoding tree and systematically enumerates distinct leaves instead of sampling with replacement. This strategy improves inference efficiency in two ways. Algorithmically, it increases coverage of the truncated search space under a fixed budget by exploring previously unvisited high-probability branches. Systemically, it reuses shared prefixes and reduces redundant token generation. Empirically, DLE explores higher-quality reasoning traces than stochastic self-consistency, yielding better performance on math, coding, and general reasoning tasks.