Can AI Models Direct Each Other? Organizational Structure as a Probe into Training Limitations
TL;DR Highlight
Having an expensive AI direct a cheap AI can achieve performance on par with the expensive AI alone — at a fraction of the cost, but only when there's a real capability gap between them.
Who Should Read
Backend/ML engineers designing LLM-based coding agents or multi-agent pipelines. Developers who need to optimize cost-performance tradeoffs and are deciding which model to assign to which role.
Core Mechanics
- Claude Sonnet 4.6 (manager) + GPT-5-mini (worker) achieves a 62% resolution rate, matching Sonnet alone (60%), while cutting strong-model token usage by ~78% (from ~30K to 6.6K)
- Adding structure without a capability gap backfires — pairing GPT-5-mini with itself drops performance from 44% to 42%
- Manager doing 'review only' yields +2pp; manager 'directly orchestrating exploration and planning' yields +11pp — active direction matters far more than simple review
- Giving the manager repository access actually hurts performance: it induces hallucinations such as claiming the issue is 'already fixed.' Restricting the manager to text-only input forces proper abstract reasoning
- Current models are trained to 'do everything solo,' making them poor at delegation and scoped execution
- Pipeline code must serve as the organizational structure — phase transitions, round limits, and prompt strategy switches must not be left to the model
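The last point can be sketched as a control loop where the round cap and the guided-to-strict prompt switch live in plain Python rather than in the model's context. This is a minimal illustration, not the paper's actual code; `run_worker`, `review`, and the value of `MAX_ROUNDS` are hypothetical stand-ins.

```python
MAX_ROUNDS = 3  # hard round cap enforced by code, not by the model (hypothetical value)

def choose_prompt(round_num: int) -> str:
    """Round 1 uses the guided prompt; all retries use the strict prompt."""
    return "guided" if round_num == 1 else "strict"

def run_pipeline(run_worker, review):
    """Drive worker rounds until the manager accepts a patch or rounds run out.

    `run_worker(mode, round_num)` and `review(patch)` are hypothetical
    callables standing in for the real model calls.
    """
    for round_num in range(1, MAX_ROUNDS + 1):
        patch = run_worker(choose_prompt(round_num), round_num)
        if review(patch):  # manager accepts: stop early
            return patch, round_num
    return None, MAX_ROUNDS  # rounds exhausted without an accepted patch
```

The point of the sketch is that the model never decides when to switch modes or stop retrying; those transitions are deterministic code.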
Evidence
- ManagerWorker (Sonnet 4.6 manager + GPT-5-mini worker) 62% vs Strong Direct (Sonnet alone) 60%, with strong-model tokens reduced ~78%, from ~30K to 6.6K (200 instances)
- Weak→Weak (GPT-5-mini as both manager and worker) 42% vs Weak Direct (GPT-5-mini alone) 44%: introducing structure costs 2pp and yields the worst total token usage at 75K (50 instances)
- Simple Loop (review only) 53% vs Full Pipeline (exploration + planning + implementation) 62%: structured orchestration contributes an additional +9pp
- Guided-then-Strict prompt strategy outperforms Strict-only by +10pp (52% → 64%), reducing empty patches from 8 to 0 (50-instance ablation)
How to Apply
- In coding-agent pipelines that require cost optimization, assign the strong model exclusively to text-only analysis, planning, and review (no file access), while the cheaper model handles actual code execution and file exploration.
- Design the worker prompt strategy as "guided on round 1, strict on retries": round 1 lets the worker explore the actual code structure freely, and on failure the manager issues precise corrective instructions.
- Have the manager decompose tasks into no more than 3 exploration subtasks at a time, and require workers to return only natural-language summary reports; this enables iterative exploration without context-window explosion.
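The subtask cap above can be enforced in code rather than trusted to the model. A minimal sketch, assuming the "TASK: " line format used in the analysis prompt; the helper name and cap value are illustrative, not from the paper:

```python
MAX_SUBTASKS = 3  # cap enforced by the pipeline, not the model

def parse_exploration_tasks(manager_output: str) -> list[str]:
    """Extract the manager's 'TASK: ' lines, capped at MAX_SUBTASKS."""
    tasks = []
    for raw in manager_output.splitlines():
        line = raw.strip()
        if line.startswith("TASK: "):
            tasks.append(line[len("TASK: "):])
    return tasks[:MAX_SUBTASKS]
```

Each extracted task would be dispatched to a worker, whose natural-language summary (not raw file contents) flows back into the manager's context.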
Code Example
# Phase 1: Manager analyzes issue (text-only, no repo access)
ANALYSIS_PROMPT = """
You are a senior software engineer analyzing a GitHub issue.
## Issue
{problem_statement}
## Your Task
Identify 2-3 specific exploration tasks that a junior
engineer should perform to gather the information needed
to fix this issue.
Each task should be a focused investigation:
- "Find the X method in Y file and report its signature"
- "Check how Z handles the case when W is None"
Output each task on its own line starting with "TASK: "
"""
# Phase 4 Round 1: Guided (worker has autonomy)
IMPL_GUIDED_PROMPT = """
You are fixing a bug in '{repo}'.
## Analysis & Plan
{plan}
## Your Task
1. Read the relevant file(s) to understand current code structure.
2. Make the changes described in the plan.
3. Run 'git diff' to produce your patch.
Use your judgment on the exact implementation.
Do NOT modify test files. Keep changes minimal.
"""
# Phase 4 Round 2+: Strict (follow corrections exactly)
IMPL_STRICT_PROMPT = """
A senior engineer has reviewed your previous attempt.
## Corrected Instructions -- follow these EXACTLY
{prior_feedback}
## Rules
1. Apply ONLY the changes described above.
2. Do NOT modify test files.
3. Output the complete git diff as your final answer.
"""
Terminology
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code that runs up to 7 parallel sub-agents, each reviewing a PR from a different perspective, and even applies automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, though the community has voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse IP packets directly and construct ICMP echo replies so that it actually answers pings; a fun case that pushes the idea that 'Markdown is the code and the LLM is the processor' all the way down to the network-stack level.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (such as Claude Code) and can even blame which prompt wrote which line of code.
Principles for agent-native CLIs
An article laying out principles for designing CLI tools that AI agents can use effectively; as agents invoke CLIs as tools more and more often, this design approach is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that coordinates multiple AI agents so they can divide roles and collaborate; like Vite, it lets you assemble a multi-agent pipeline quickly with zero configuration.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool providing an isolated sandbox environment where AI agents can touch real production data and still roll back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.
Related Resources
Original Abstract
Can an expensive AI model effectively direct a cheap one to solve software engineering tasks? We study this question by introducing ManagerWorker, a two-agent pipeline where an expensive "manager" model (text-only, no code execution) analyzes issues, dispatches exploration tasks, and reviews implementations, while a cheap "worker" model (with full repo access) executes code changes. We evaluate on 200 instances from SWE-bench Lite across five configurations that vary the manager-worker relationship, pipeline complexity, and model pairing. Our findings reveal both the promise and the limits of multi-agent direction: (1) a strong manager directing a weak worker (62%) matches a strong single agent (60%) at a fraction of the strong-model token usage, showing that expensive reasoning can substitute for expensive execution; (2) a weak manager directing a weak worker (42%) performs worse than the weak agent alone (44%), demonstrating that the directing relationship requires a genuine capability gap--structure without substance is pure overhead; (3) the manager's value lies in directing, not merely reviewing--a minimal review-only loop adds just 2pp over the baseline, while structured exploration and planning add 11pp, showing that active direction is what makes the capability gap productive; and (4) these behaviors trace to a single root cause: current models are trained as monolithic agents, and splitting them into director/worker roles fights their training distribution. The pipeline succeeds by designing around this mismatch--keeping each model close to its trained mode (text generation for the manager, tool use for the worker) and externalizing organizational structure to code. This diagnosis points to concrete training gaps: delegation, scoped execution, and mode switching are skills absent from current training data.