Show HN: Real-time dashboard for Claude Code agent teams
TL;DR Highlight
An open-source real-time monitoring dashboard that solves the visibility problem when Claude Code runs multiple sub-agents in parallel. Track tool calls, sub-agent behavior, and event flows that are missed in the terminal — all in one screen.
Who Should Read
Developers running Claude Code with multiple agents in parallel or operating complex multi-agent workflows. A perfect fit when you want to understand what's happening in real time rather than digging through post-mortem logs after an agent fails.
Core Mechanics
- When Claude Code autonomously spawns sub-agents and makes tool calls, the terminal shows only a fraction of the total activity. Sub-agents operate essentially invisibly, and when something goes wrong three levels deep, the only recourse has been sifting through logs after the fact; this project solves that problem.
- Instead of OTEL (OpenTelemetry, the standard distributed-tracing framework), it uses Claude Code's hook system to capture events. Hooks provide a complete picture of agent behavior, recording tool-call sequences and their contents as-is.
- Installation is straightforward: `claude plugin marketplace add simple10/agents-observe` → `claude plugin install agents-observe` → restart Claude Code. From the next session onward, a Docker container starts automatically and the dashboard is available at http://localhost:4981. Docker and Node.js are prerequisites.
- To minimize the performance impact of hooks, the implementation uses a background async approach instead of blocking (synchronous) execution. In environments where agents make dozens of tool calls per minute, even 100ms of blocking per hook adds 2–3 seconds to the overall task, and this multiplies with more parallel agents.
- Both local and remote setups are supported. Running the server and dashboard on a remote VM allows multiple Claude Code instances to send events to the same server, enabling monitoring of an entire multi-agent team from a single place.
- The dashboard includes powerful filtering, search, and multi-agent session visualization. You can review which tools each agent called and in what order via a timeline view, allowing you to reconstruct 'how the agent arrived at this conclusion' after the fact.
- Two slash commands are provided — /observe and /observe status — to open the dashboard URL or check server status directly within a Claude Code session.
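The fire-and-forget hook pattern described above can be sketched in shell. This is an illustration of the idea only, not the plugin's actual implementation: the log path is made up, and a 0.2s sleep stands in for the real network call to the collector.

```shell
#!/bin/sh
# Sketch of a non-blocking hook: the slow sink (a 0.2s sleep standing
# in for an HTTP POST to the collector) runs in the background, so the
# agent-facing hook path returns immediately.
log=/tmp/agents_observe_demo.log
: > "$log"                      # start from an empty log

emit_event() {
  # In the real plugin this would be a network call to the collector
  # (endpoint assumed, not confirmed); here we just simulate latency.
  { sleep 0.2; echo "$1" >> "$log"; } &
}

start=$(date +%s)
for i in 1 2 3 4 5; do
  emit_event "tool_call_$i"
done
end=$(date +%s)
echo "hook path blocked for $((end - start))s"  # 0s: the loop never waited
wait                                            # let the background writes finish
```

A blocking variant would serialise the five sinks into roughly a second of added latency on the agent's critical path; backgrounding keeps the loop near-instant, which is exactly the property the overhead argument above depends on.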
Evidence
- "A user running multiple Claude Code agents in parallel on a remote VM shared firsthand experience that 'throughput drops sharply when hooks block on the agent critical path.' With agents making dozens of tool calls per minute, hundreds of milliseconds accumulating per hook is very noticeable. They assessed the Docker-based service pattern as the right trade-off for achieving observability without adding overhead to the agents themselves. Multiple people raised the transparency issue that 'there's no way to know whether an agent's self-report matches actual outcomes' in multi-agent operations. When a coordinator spawns builder, reviewer, and tester agents in parallel, the results each agent reports may be 'sanitised optimism,' and it was noted that event stream logs cannot verify whether results are actually correct. A question was raised about concurrency handling when multiple agent teams write to the same JSONL file simultaneously — no concrete answer regarding log file conflict handling in parallel agent environments was confirmed in the thread. There were also surprised reactions to Claude Code usage costs, with comments like 'are you spending hundreds to thousands of dollars a day on Claude tokens?' — suggesting that heavy users running multiple agents in parallel for extended periods are the primary target audience. There were also realistic comments that average developers hit usage limits quickly. Someone asked whether the dashboard tracks the full tree or only one level when sub-agents spawn their own sub-agents (a tree structure) — an important edge case in real production environments — but no clear answer was confirmed in the thread."
How to Apply
- "If you're running multiple agents in parallel with Claude Code and struggle to identify the cause when something goes wrong, install with `claude plugin install agents-observe` and monitor tool call flows in real time at the localhost:4981 dashboard — pinpoint the problem immediately without post-mortem log analysis. If you're operating a Claude Code agent team on a remote VM, bring up the server with Docker Compose and configure multiple instances to send events to the same endpoint to observe distributed agents from a single dashboard. `docker-compose.yml` is already included. If you're skeptical about the reliability of an agent's output, trace the actual path through the event timeline — what files the agent read and what commands it executed — to uncover discrepancies between 'self-reported' behavior and actual actions. If you've tried building your own hook-based monitoring system but abandoned it due to performance issues, reference this project's background async hook pattern to redesign your system so it doesn't block the agent critical path."
Code Example
# Installation
claude plugin marketplace add simple10/agents-observe
claude plugin install agents-observe
# After restarting Claude Code, the Docker container starts automatically
# Dashboard: http://localhost:4981
# Slash commands available within a Claude Code session
/observe # Open dashboard URL + check server status
/observe status # Server health check and URL display
# Run directly with Docker Compose
docker-compose up -d
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to seven parallel sub-agents each review a PR from a different perspective and even apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, but the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually responds to pings; an entertaining case of pushing the "Markdown is code and the LLM is the processor" idea all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by an AI coding agent (such as Claude Code) and even supports blame, showing which prompt wrote which line of code.
Principles for agent-native CLIs
An article laying out principles for designing CLI tools that AI agents can use well; as agents increasingly rely on CLIs as tools, this design approach is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that orchestrates multiple AI agents collaborating in divided roles, letting you stand up a multi-agent pipeline quickly with zero configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool providing an isolated sandbox where AI agents can touch real production data and still be rolled back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.