Agentic Forecasting using Sequential Bayesian Updating of Linguistic Beliefs
TL;DR Highlight
Ablations show the Bayesian Linguistic Belief State matters more than web search itself: removing it costs more Brier Index (5.1) than removing search (3.4).
Who Should Read
Backend developers building LLM-powered prediction systems or Research Agents. Engineers frustrated by simple context accumulation in RAG pipelines and seeking more structured state management.
Core Mechanics
- The core idea is a 'Bayesian Linguistic Belief State': with each search step, the LLM updates a JSON object containing probability estimates + supporting evidence summaries + open questions, differing from traditional methods of endlessly stacking text into context.
- Removing the Belief State degrades performance (Brier Index drops by 5.1), a greater impact than removing web search (3.4 drop) – structured state management is more important than search itself.
- Employing Multi-Trial Aggregation—running 5 independent trials in parallel and averaging—reduces the high variance of LLM predictions. However, this averaging is theoretically beneficial for Brier Score (quadratic loss) but not for Brier Index (linear loss).
- Calibration uses Hierarchical Platt Scaling (a correction with source-specific intercept offsets), which outperforms global Platt Scaling across all settings, chiefly by preventing over-shrinking of extreme predictions for sources with skewed base rates (e.g., Wikipedia vaccine questions).
- Sequential processing (one search step at a time, updating the belief after each) yields a Brier Index 7.7 points higher than batch processing (all searches in parallel, then a single inference pass), the single largest performance differentiator.
- Ensembling models of the same architecture is ineffective: given the same prompts, tools, and search results, their forecasts are nearly identical (Jensen-Shannon Divergence of only 0.006 to 0.014), and the ensemble can even degrade performance (see the sketch below).
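To get a feel for how small those divergences are, here is a minimal, self-contained sketch (not from the paper) that computes Jensen-Shannon divergence between two Bernoulli forecasts; the forecast values are hypothetical.

import numpy as np

def jsd(p: float, q: float) -> float:
    """Jensen-Shannon divergence between two Bernoulli forecasts (natural log);
    assumes p and q are strictly between 0 and 1."""
    def kl(a: float, b: float) -> float:
        av = np.array([a, 1.0 - a])
        bv = np.array([b, 1.0 - b])
        return float(np.sum(av * np.log(av / bv)))
    m = (p + q) / 2.0
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Two same-pipeline models giving nearly identical forecasts (hypothetical values):
print(round(jsd(0.65, 0.55), 4))  # ~0.0052, comparable to the 0.006-0.014 range above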
Evidence
- "Achieving a total Brier Index of 83.5 with BLF across 400 ForecastBench backtest questions – significantly outperforming GPT-5 (79.9), Cassi (79.5), Grok 4.20 (78.8), and Foresight-32B (79.2) with p<0.001 significance."
How to Apply
- Instead of continuously appending search results to context in a RAG agent, structure your prompts to have the LLM update a JSON belief state of the form {probability, confidence, evidence_for, evidence_against, open_questions} after each search. This structured belief also informs subsequent search queries.
- Applying a Multi-Trial pattern—running the same question independently multiple times (K=5 recommended) and averaging—guarantees improvement for evaluation metrics based on quadratic loss (Brier Score) due to Jensen's inequality. However, it offers no theoretical benefit for linear loss-based metrics, where median aggregation is slightly preferable.
- For systems requiring different calibration for source types (e.g., a mix of market data, economic indicators, and Wikipedia), applying Hierarchical Platt Scaling with source-specific intercept offsets instead of global Platt Scaling can improve Brier Index by +3.5 from a zero-shot baseline.
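A minimal sketch of that idea, assuming a global Platt slope/intercept plus per-source intercept offsets shrunk toward zero by an L2 prior (the paper's exact hierarchical prior may differ; the function names, hyperparameters, and data below are hypothetical):

import numpy as np

def logit(p):
    p = np.clip(np.asarray(p, dtype=float), 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_hierarchical_platt(probs, labels, sources, lam=1.0, lr=0.1, steps=5000):
    """Fit p_cal = sigmoid(a * logit(p) + b + c[source]) by gradient descent;
    the L2 prior on c shrinks per-source offsets toward the global fit."""
    x, y = logit(probs), np.asarray(labels, dtype=float)
    names, idx = np.unique(sources, return_inverse=True)
    a, b, c = 1.0, 0.0, np.zeros(len(names))
    n = len(y)
    for _ in range(steps):
        err = sigmoid(a * x + b + c[idx]) - y   # d(log loss)/d(logit)
        a -= lr * float(np.mean(err * x))
        b -= lr * float(np.mean(err))
        grad_c = np.bincount(idx, weights=err, minlength=len(names)) / n
        c -= lr * (grad_c + 2.0 * lam * c / n)  # L2 prior: shrink offsets toward 0
    return a, b, dict(zip(names, c))

def apply_hierarchical_platt(p, source, a, b, offsets):
    return float(sigmoid(a * logit(p) + b + offsets.get(source, 0.0)))

# Hypothetical calibration data mixing sources with different base rates:
a, b, offsets = fit_hierarchical_platt(
    probs=[0.9, 0.7, 0.95, 0.4, 0.85, 0.2],
    labels=[1, 1, 1, 0, 1, 0],
    sources=["market", "market", "wikipedia", "market", "wikipedia", "market"],
)
print(apply_hierarchical_platt(0.9, "wikipedia", a, b, offsets))

Because each offset is penalized toward zero, sources with few questions fall back to the global calibration, while well-represented sources with extreme base rates earn their own shift.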
Code Example
# BLF Bayesian Linguistic Belief State structure example
# Configure the LLM to generate this JSON along with each tool call
belief_state_schema = {
    "probability": 0.5,     # P(outcome=1 | evidence so far)
    "confidence": "low",    # low / medium / high
    "evidence_for": [],     # List of summaries supporting the outcome
    "evidence_against": [], # List of summaries refuting the outcome
    "open_questions": [],   # Questions to search for next
    "update_reasoning": ""  # Why the probability changed in this step
}

# Agent loop pseudocode (llm_call, execute_tool, and the tool handles are
# placeholders for your own LLM client and tool implementations)
def blf_agent(question: str, cutoff_date: str, max_steps: int = 10):
    belief = {"probability": 0.5, "confidence": "low",
              "evidence_for": [], "evidence_against": [],
              "open_questions": [question], "update_reasoning": "Prior"}
    history = [{"role": "user", "content": question}]
    for step in range(max_steps):
        # LLM generates action + updated belief in one go
        response = llm_call(
            messages=history,
            tools=[web_search, read_files, url_lookup, submit],
            # Key: include an updated_belief field in the tool call arguments
            system_prompt=f"""After each action, update your belief state as JSON:
{belief_state_schema}
Current belief: {belief}
Cutoff date: {cutoff_date} - do not use information after this date."""
        )
        action = response.tool_call
        new_belief = response.updated_belief  # LLM generates this
        if action.name == "submit":
            return action.args["probability"]
        # Execute the action (with date filtering to avoid leakage)
        observation = execute_tool(action, cutoff_date=cutoff_date)
        # Update history and carry the belief forward
        history.append({"role": "assistant", "tool_call": action, "belief": new_belief})
        history.append({"role": "tool", "content": observation})
        belief = new_belief
    return belief["probability"]  # Return last belief if max_steps reached

# Multi-Trial Aggregation
import numpy as np

def aggregate_trials(forecasts: list[float], metric: str = "brier_score") -> float:
    """Mean for convex (quadratic/log) losses; median slightly better for linear loss."""
    if metric in ["brier_score", "log_score"]:
        return float(np.mean(forecasts))  # Improvement guaranteed by Jensen's inequality
    else:  # brier_index (linear)
        return float(np.median(forecasts))  # +0.2 BI improvement

# Run 5 independent trials (question and cutoff_date as defined for blf_agent)
K = 5
forecasts = [blf_agent(question, cutoff_date) for _ in range(K)]
final_forecast = aggregate_trials(forecasts)
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code that runs up to 7 parallel sub-agents, each reviewing a PR from a different perspective, and can even apply fixes automatically. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community has voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that has Claude Code parse raw IP packets and construct ICMP echo replies so it actually answers pings, an amusing case of pushing the "Markdown is the code, the LLM is the processor" idea all the way down to the network stack.
Show HN: Git for AI Agents
A version control tool that automatically tracks every tool call made by AI coding agents (Claude Code, etc.) and even supports blame, tracing which prompt wrote which line of code.
Principles for agent-native CLIs
An article laying out principles for designing CLI tools that AI agents can use well; as agents increasingly rely on CLIs as tools, this style of design is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that orchestrates multiple AI agents collaborating in divided roles, letting you assemble a multi-agent pipeline quickly and without configuration, Vite-style.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool providing an isolated sandbox where AI agents can touch real production data and still roll back, unifying GitHub/S3/Google Drive into a single versioned filesystem.
Related Resources
Original Abstract
We present BLF (Bayesian Linguistic Forecaster), an agentic system for binary forecasting that achieves state-of-the-art performance on the ForecastBench benchmark. The system is built on three ideas. (1) A Bayesian linguistic belief state: a semi-structured representation combining numerical probability estimates with natural-language evidence summaries, updated by the LLM at each step of an iterative tool-use loop. This contrasts with the common approach of appending all retrieved evidence to an ever-growing context. (2) Hierarchical multi-trial aggregation: running K independent trials and combining them using logit-space shrinkage with a data-dependent prior. (3) Hierarchical calibration: Platt scaling with a hierarchical prior, which avoids over-shrinking extreme predictions for sources with skewed base rates. On 400 backtesting questions from the ForecastBench leaderboard, BLF outperforms all the top public methods, including Cassi, GPT-5, Grok 4.20, and Foresight-32B. Ablation studies show that the structured belief state is as impactful as web search access, and that shrinkage aggregation and hierarchical calibration each provide significant additional gains. In addition, we develop a robust back-testing framework with a leakage rate below 1.5%, and use rigorous statistical methodology to compare different methods while controlling for various sources of noise.
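The abstract's "logit-space shrinkage with a data-dependent prior" is not spelled out in the code above; the following is a minimal sketch of one plausible form, assuming trial logits are averaged and then shrunk toward a prior logit with a weight that grows with cross-trial disagreement. The paper's exact prior and weighting may differ; prior_p and lam are hypothetical parameters.

import numpy as np

def shrinkage_aggregate(forecasts: list[float], prior_p: float = 0.5, lam: float = 1.0) -> float:
    """Average K trial forecasts in logit space, then shrink toward a prior logit.
    Shrinkage weight is data-dependent: more cross-trial variance -> more shrinkage."""
    p = np.clip(np.asarray(forecasts, dtype=float), 1e-6, 1 - 1e-6)
    logits = np.log(p / (1 - p))
    # Empirical-Bayes-style weight: prior precision lam vs. data precision n/var
    w = lam / (lam + len(p) / (logits.var() + 1e-6))
    z = w * np.log(prior_p / (1 - prior_p)) + (1 - w) * logits.mean()
    return float(1.0 / (1.0 + np.exp(-z)))

# Hypothetical trial forecasts: close agreement, so little shrinkage toward 0.5
print(shrinkage_aggregate([0.70, 0.80, 0.75, 0.90, 0.72]))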