A case study in testing with 100+ Claude agents in parallel
TL;DR Highlight
The Imbue team has released the entire architecture for automating end-to-end tests of their CLI tool `mngr` by launching over 100 Claude agents in parallel. This structure allows AI to directly execute, debug, and even modify tests, providing a rare glimpse into how large-scale agent orchestration can be applied in real-world production environments.
Who Should Read
Developers considering test automation for CLI tools or internal platforms, or backend/DevOps engineers designing systems to orchestrate multiple AI agents in parallel.
Core Mechanics
- The pipeline consists of four steps: writing the tutorial.sh script, converting its command blocks into pytest functions, launching one agent per pytest function to execute, debug, and modify it, and integrating the results from all agents into a single output.
- The creation of tutorial.sh itself leverages agents. Roughly 50 examples are written by hand; the rest are left as bare comments (e.g., `# Managing snapshots`) for a coding agent to fill in. Because the codebase contains auto-generated man pages, the agent can produce fairly accurate examples.
- Even poorly generated examples from the agent can be useful. They can be interpreted as signals that the interface is too complex or documentation is lacking, and this signal is used to refine the interface of mngr itself.
- There is a 1:N structure where one tutorial block is converted into multiple pytest functions. Because the same command can yield different results depending on the environment or input, happy paths (normal scenarios) and unhappy paths (error scenarios) are separated into distinct tests.
- To track which tutorial block a test function originated from, the agent is required to explicitly 'cite' the block within the function. A separate script automatically verifies that at least one pytest function exists for each tutorial block.
- The Arrange, Act, and Assert stages of an e2e test are hard for both humans and agents to get right on the first pass. The design therefore does not expect completeness at the first stage; quality is improved in the later stages, where the agent itself executes and modifies the tests.
- The test framework is built as a thin layer on top of Python's subprocess module. The basic structure of executing CLI commands and retrieving stdout, stderr, and exit codes is supplemented with utilities to make test functions more concise and contextual.
Evidence
- A comment pointed out that understanding the cost model matters more than simply running 100 agents in parallel. In the actual codebase, a single agent run consumes 20-50k tokens just to understand the repo structure, related files, and recent changes; with 100 agents working through 10-20 repos per hour, that is already hundreds of millions of tokens spent before any actual work is done.
- Another comment highlighted the observability problem with parallel agents. What went wrong with a single agent is easy to see in its logs, but 100 agents require aggregation, pattern detection, and common-failure notifications. It is hard to tell whether 40 agents timing out at the same stage indicates a dependency issue or infrastructure saturation.
- A critical comment noted that the blog post is ultimately a marketing pitch for the team's agent-orchestration product, `mngr`. The technical content is interesting, but it should be read with that context in mind.
- A comment questioned the use of tmux, since the original post mentions agents running inside tmux sessions. This is a design choice: `mngr` uses tmux internally for session management, which differs from typical server process management and can feel unfamiliar.
- One commenter found claims like "deploying to production every 17 minutes" hard to believe when building a single feature in Claude Code takes hours of babysitting, and many replies agreed. The consensus was that there is a gap between the results presented in the blog and real-world experience.
How to Apply
- If you're starting to create e2e tests for a CLI tool and are struggling to come up with example cases, create a tutorial.sh with around 50 core commands written directly, and then have a coding agent fill in the remaining empty comments (e.g., `# Delete snapshot scenario`). This will quickly provide a draft. If the agent writes strange examples, use it as a signal to improve the interface.
- If you're running pytest-based e2e tests and can't track which tests correspond to which documents/scenarios, add a fixture API that explicitly cites the original tutorial block within the test function and use a separate script to verify the mapping. This will automate document-test synchronization.
- When designing a system to run multiple agents in parallel, prioritize setting a level of concurrency that allows you to track failure causes within API rate limits and budget constraints, rather than simply launching as many agents as possible simultaneously. Keep in mind that loading context for a single agent can consume 20-50k tokens, so if it's a batch job, first calculate the daily token budget and then determine the number of agents and execution frequency.
- If you plan to operate 100 or more agents in parallel, debugging will be impossible with simple log checks. Therefore, include aggregation dashboards and pattern-based alerts (e.g., 'notify me if N or more fail at the same stage') in the design from the beginning.
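The token-budget arithmetic from the third bullet above can be made concrete. The 20-50k context figure comes from the comments quoted in the Evidence section; everything else (the helper name, the example numbers) is illustrative.

```python
def daily_context_tokens(agents: int, runs_per_agent_per_hour: float,
                         tokens_per_run: int, hours: int = 24) -> int:
    """Tokens spent on context loading alone, before any useful work."""
    return int(agents * runs_per_agent_per_hour * tokens_per_run * hours)

# 100 agents, 10 runs per agent per hour, 35k context tokens per run:
budget = daily_context_tokens(100, 10, 35_000)
# 100 * 10 * 35_000 * 24 = 840,000,000 tokens per day on context alone
```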
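The "N or more failing at the same stage" alert from the last bullet reduces to grouping failures by stage. This sketch assumes failures have already been parsed out of agent logs into (agent_id, stage) pairs; the function name and threshold are illustrative.

```python
from collections import Counter

def stages_to_alert(failures: list[tuple[str, str]],
                    threshold: int = 10) -> list[str]:
    """Return stages where at least `threshold` agents failed.

    failures: (agent_id, stage) pairs aggregated from agent logs.
    """
    counts = Counter(stage for _, stage in failures)
    # A cluster (e.g. 40 timeouts at the same stage) suggests a shared
    # dependency or infrastructure saturation rather than independent bugs.
    return [stage for stage, n in counts.items() if n >= threshold]
```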
Code Example
# Example test function (excerpt from the original text)
def test_help_succeeds(e2e: E2eSession) -> None:
    e2e.write_tutorial_block("""
    # or see the other commands--list, destroy, message, connect, push, pull, clone, and more!
    """)
    # Built on subprocess: execute the CLI command and
    # verify stdout, stderr, and the exit code
# tutorial.sh block example: the coding agent fills in
# commands beneath otherwise-empty comments

# Managing snapshots
mngr snapshot list
mngr snapshot create my-snapshot
mngr snapshot restore my-snapshot
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code that runs up to 7 parallel sub-agents, each reviewing a PR from a different perspective, and can even apply fixes automatically. It claims to catch more real bugs than the built-in /review or CodeRabbit, but the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that made Claude Code answer real pings by parsing IP packets directly and constructing ICMP echo replies; an entertaining case that pushes the idea of "Markdown is the code and the LLM is the processor" all the way down to the network-stack level.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by an AI coding agent (such as Claude Code) and even supports blame: which prompt wrote which line of code.
Principles for agent-native CLIs
A write-up of design principles for making CLI tools easier for AI agents to use; as agents increasingly rely on CLIs as tools, this style of design is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool for orchestrating multiple AI agents that divide up roles and collaborate; like Vite, it lets you assemble a multi-agent pipeline quickly with no configuration.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that provides an isolated sandbox where an AI agent can touch real production data and still be rolled back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.