Bian Que: An Agentic Framework with Flexible Skill Arrangement for Online System Operations
TL;DR Highlight
An LLM agent framework automates incident response, cutting alert volume by 75% and mean time to resolution by over 50%.
Who Should Read
SREs (Site Reliability Engineers) and DevOps engineers of large-scale online services, particularly teams grappling with automation in environments with dozens of service modules and daily deployments.
Core Mechanics
- O&M (Operations & Maintenance) tasks are abstracted into three patterns: Release Interception, Proactive Inspection, and Alert Root Cause Analysis (RCA). This contrasts with existing tools focused solely on post-incident response.
- Flexible Skill Arrangement is key: Each Skill structures which data (metrics, logs, change events) and knowledge a specific business-module combination should access. LLMs automatically generate and update Skills, and engineers can modify them in natural language.
- Skill structures consist of three fields: LoadDataSchema (JSON specification of what to retrieve), Prompt (inference template), and Meta (name/version/tags). Updates are possible via natural language feedback without code changes.
- A single feedback signal simultaneously trains two pathways: Knowledge Pathway (distilling failure patterns into long-term knowledge) and Skill Pathway (improving data routing logic). This contrasts with conventional RAG systems that rely on static knowledge bases.
- LLM backbones are pluggable and model-agnostic. Experiments show that models with 35B+ parameters—DeepSeek-V3.2, GLM5, and Qwen3.5-35B-FP8—achieve pass@1 rates of 72-78%. Performance drops sharply below 9B parameters.
- Accuracy plummets from 75% to 32% after 13 days of operation without feedback. Maintaining a feedback loop sustains accuracy above 80% and enables self-correction for new failure types.
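The Skill = <LoadDataSchema, Prompt, Meta> structure described above can be sketched as a minimal data model. The class and method names below are illustrative assumptions, not the paper's actual implementation:

```python
# Minimal sketch of the paper's Skill = <LoadDataSchema, Prompt, Meta> triple.
# All names here are hypothetical; the paper does not publish this data model.
from dataclasses import dataclass, field

@dataclass
class Meta:
    """Identifying metadata for a Skill (name / version / tags)."""
    name: str
    version: str
    tags: list[str] = field(default_factory=list)

@dataclass
class DataSource:
    """One LoadDataSchema entry: what to retrieve and whether it is required."""
    type: str          # e.g. "time_series_metric", "structured_log", "change_event"
    name: str
    mandatory: bool
    params: dict = field(default_factory=dict)

@dataclass
class Skill:
    """A Skill routes data/knowledge retrieval separately from the reasoning prompt."""
    meta: Meta
    load_data_schema: list[DataSource]
    prompt: str        # inference template filled with the retrieved data

    def required_sources(self) -> list[str]:
        return [s.name for s in self.load_data_schema if s.mandatory]

skill = Skill(
    meta=Meta(name="recall-availability", version="1.2", tags=["recall"]),
    load_data_schema=[
        DataSource("time_series_metric", "recall_module_qps", True, {"window": "30m"}),
        DataSource("structured_log", "recall_error_log", False, {"level": "ERROR"}),
    ],
    prompt="You are an expert in recommendation system recall modules...",
)
print(skill.required_sources())  # ['recall_module_qps']
```

Separating retrieval (LoadDataSchema) from reasoning (Prompt) is what lets the LLM regenerate or refine either field independently.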
Evidence
- "Six months of production deployment yielded a 75% reduction in alerts and a 95% decrease in non-actionable alerts (0.25 × 0.15/0.80 ≈ 4.7%)."
How to Apply
- "For large services with multiple business modules/teams, manage Skill YAMLs per module to separate 'what metrics/logs to monitor' from 'how to reason'. Start by having the LLM auto-generate Skills from existing data sources and scenarios, then refine with on-call engineer feedback."
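The dual-pathway update described in Core Mechanics, where one correction signal feeds both the knowledge base and the Skill's data routing, can be sketched as follows. The function and field names are illustrative assumptions, not the paper's API:

```python
# One correction signal drives two parallel update pathways (the paper's
# self-evolving mechanism); all names here are illustrative assumptions.

def apply_feedback(signal: dict, knowledge_base: dict, skill: dict) -> None:
    """Route a single on-call correction to both pathways."""
    # Knowledge Pathway: distill the corrected case into long-term knowledge,
    # keyed so later retrievals can match the same failure pattern.
    key = (signal["module"], signal["failure_pattern"])
    knowledge_base.setdefault(key, []).append(signal["corrected_root_cause"])

    # Skill Pathway: refine data-routing logic, e.g. promote a data source
    # that the correction relied on to mandatory for this module.
    for src in skill["LoadDataSchema"]["data_sources"]:
        if src["name"] in signal.get("decisive_sources", []):
            src["mandatory"] = True

kb = {}
skill = {"LoadDataSchema": {"data_sources": [
    {"name": "recall_error_log", "mandatory": False},
]}}
apply_feedback(
    {"module": "recall", "failure_pattern": "latency_spike",
     "corrected_root_cause": "bad release r1234",
     "decisive_sources": ["recall_error_log"]},
    kb, skill,
)
print(skill["LoadDataSchema"]["data_sources"][0]["mandatory"])  # True
```

Because both updates come from the same signal, a single correction improves both what the agent knows and what data it looks at next time.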
Code Example
# Skill YAML structure example (the paper's Skill = <LoadDataSchema, Prompt, Meta> structure)
name: recommendation-recall-availability
version: 1.2
description: Recommendation recall module availability check Skill
tags: [recommendation, recall, availability]
LoadDataSchema:
  data_sources:
    - type: time_series_metric
      name: recall_module_qps
      mandatory: true
      params:
        module: recommendation_recall
        metrics: [qps, latency_p99, error_rate]
        window: 30m
    - type: structured_log
      name: recall_error_log
      mandatory: false
      params:
        service: recall-service
        level: ERROR
        limit: 100
    - type: change_event
      name: recent_releases
      mandatory: true
      params:
        module: recommendation_recall
        lookback: 2h
  knowledge_queries:
    - index: KV
      key: recommendation.recall.behavioral_norms
    - index: KKV
      key1: recommendation_recall
      key2: gmv_impact
    - index: vector
      query: "recall module availability degradation patterns"
      top_k: 3
Prompt: |
  You are an expert in recommendation system recall modules.
  ## Analysis Procedure
  1. Check the trend of QPS/latency/error rate over the last 30 minutes.
  2. Cross-validate recent deployment changes with indicator changes.
  3. Compare with known failure patterns in the knowledge base.
  4. Judgment: Choose one of [Normal / Warning / Failure] and provide justification.
  5. Specify recommended actions (rollback / strengthen monitoring / further investigation required).
  ## Output Format
  {
    "verdict": "Normal|Warning|Failure",
    "confidence": 0.0~1.0,
    "root_cause": "Cause explanation",
    "evidence": ["Evidence 1", "Evidence 2"],
    "recommended_action": "Recommended action"
  }
Terminology
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to 7 parallel sub-agents each review a PR from a different perspective and even apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community has voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually answers pings. An entertaining case that pushes the "Markdown is the code and the LLM is the processor" idea all the way down to the network-stack level.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (such as Claude Code) and supports blame down to which prompt produced which line of code.
Principles for agent-native CLIs
A write-up of design principles for making CLI tools easier for AI agents to use; as agents invoke CLIs more and more often, these design practices are becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that orchestrates multiple AI agents collaborating in separate roles, letting you assemble a multi-agent pipeline quickly with zero configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that gives AI agents an isolated sandbox where even changes to real production data can be rolled back, unifying GitHub, S3, and Google Drive into a single version-controlled filesystem.
Related Resources
Original Abstract
Operating and maintaining (O&M) large-scale online engine systems (search, recommendation, advertising) demands substantial human effort for release monitoring, alert response, and root cause analysis. While LLM-based agents are a natural fit for these tasks, the deployment bottleneck is not reasoning capability but orchestration: selecting, for each operational event, the relevant data (metrics, logs, change events) and the applicable operational knowledge (handbook rules and practitioner experience). Feeding all signals indiscriminately causes dilution and hallucination, while manually curating the event-to-(data, knowledge) mapping is intractable under dozens of daily releases. We present Bian Que, an agentic framework with three contributions: (i) a unified operational paradigm abstracting day-to-day O&M into three canonical patterns: release interception, proactive inspection, and alert root cause analysis; (ii) Flexible Skill Arrangement, where each Skill specifies which data and knowledge to retrieve for a given business-module context and can be automatically generated and updated by LLMs or iteratively refined through natural-language instructions from on-call engineers; (iii) a unified self-evolving mechanism in which one correction signal drives two parallel pathways, case-memory-to-knowledge distillation and targeted Skill refinement. Deployed on the e-commerce search engine of KuaiShou, the major short-video platform in China, Bian Que reduces alert volume by 75%, achieves 80% root-cause analysis accuracy, and cuts mean time to resolution by over 50%. Our framework achieves 99.0% pass rate on offline evaluations. Our code is available at https://github.com/benchen4395/BianQue_Assistant.