Agent-to-agent pair programming
TL;DR Highlight
Introducing 'loop', a CLI tool that runs Claude and Codex side by side in tmux and lets them communicate with each other. The two agents take on the roles of developer and reviewer, mimicking human pair programming.
Who Should Read
Developers who are already using AI coding agents like Claude Code or Codex in practice, feel the limitations of a single agent, and want to experiment with multi-agent workflows.
Core Mechanics
- The author discovered an interesting pattern while using Claude and Codex side by side for code review: when both agents gave the same feedback, it was a strong signal rather than noise, which naturally led to a team rule of always acting on feedback that both reviewers agreed on.
- Based on this, the author built a CLI tool called 'loop': a simple tool that launches claude and codex side by side in tmux and connects a bridge for direct message passing between the two agents. It is open-sourced on GitHub (https://github.com/axeldelafosse/loop).
- Because loop uses the interactive TUI (terminal UI) as-is, humans can intervene at any point — answering questions, adjusting direction, or giving follow-up instructions. It is designed to keep humans 'in the loop' rather than being fully automated.
- As the Cursor research team observed in its work on long-running coding agents, well-functioning agent workflows resemble human team collaboration structures. Claude Code's 'Agent teams' and Codex's 'Multi-agent' features both have a main agent distributing tasks to sub-agents, but the author goes a step further by enabling sub-agents to communicate directly with each other.
- Running agents continuously in a loop can result in more code changes than expected. The author views this mostly positively, but it creates a problem where the volume of changes becomes too large for humans to review later. Open questions raised to address this include whether to split PRs into multiple smaller ones, or whether to share PLAN.md in git or PR descriptions.
- There are various reasons to use multiple agent tools simultaneously: avoiding vendor lock-in, contributing to open source, maximizing subscription limits, or leveraging the strengths and differing perspectives of each model. The author argues that multi-agent apps should support inter-agent communication as a first-class feature.
- Based on the author's experience, Claude excels at generation and creative tasks, while Codex excels at meticulous and accurate auditing and critical review. The observation is that the personality differences between the two models naturally map onto pair programming role assignments.
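The 'act on agreement' rule described above can be sketched as a simple set intersection. This is an illustrative simplification: it assumes each reviewer's feedback has already been normalized into comparable strings, whereas matching two agents' free-form comments in practice would require fuzzy or semantic comparison.

```python
def must_fix(claude_feedback, codex_feedback):
    """Return the feedback items raised by both reviewers.

    Assumes items are normalized, comparable strings; real agent
    feedback would need fuzzy or semantic matching instead of
    exact set intersection.
    """
    return sorted(set(claude_feedback) & set(codex_feedback))


claude = ["missing null check in parse_config", "rename tmp variable"]
codex = ["missing null check in parse_config", "add test for empty input"]

# Only the issue both reviewers flagged survives the filter.
print(must_fix(claude, codex))
# → ['missing null check in parse_config']
```

Treating the intersection as must-fix and the rest as discretionary mirrors the team rule the author describes: agreement between two independent reviewers is the signal; everything else is candidate noise.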
Evidence
- Firsthand accounts described a similar approach: when Claude's completed output was routed to Codex for review, it was very rare that Claude had fully and successfully finished the task, and Codex almost always found issues.
- One tip: having Claude summarize its work as a 'why/where/how' document before handing it off to Codex improves review quality.
- There was skepticism about whether the effectiveness of multi-agent setups comes from actually having multiple agents, or simply from alternating between two different configurations (system prompt, model, temperature, context pruning, toolset, etc.). The key may be introducing different perspectives and settings, not the number of agents.
- A sharp counterargument to putting PLAN.md in git or a PR: once PLAN.md is committed, it sits downstream of the implementation, so when the code diverges from the plan it becomes harder to trace why certain decisions were made, even though the original intent is what truly matters.
- Another view held that pair programming doesn't work well even for humans: it is difficult to verbalize complex thought processes in real time, and from the outside it can look like randomly changing code. This reflects skepticism toward applying human collaboration patterns to AI agents.
- Repeated calls were made for systematic measurement of multi-agent setups, since most evidence is still anecdotal; multiple comments said 'the vibe is good, but we need science.'
- A similar tool, claude-review-loop (https://github.com/hamelsmu/claude-review-loop), was also mentioned.
How to Apply
- If you have subscriptions to both Claude Code and Codex, install the `loop` CLI and experiment with a pair programming workflow where Claude writes code and Codex reviews it.
- Treat feedback that both agents agree on as 'must-fix'. This reduces review noise while ensuring important issues aren't missed.
- Have Claude write a why/where/how summary document after completing a task, then pass it to Codex as review context to improve review quality. This pattern can be applied manually right away, even without loop.
- If agent loops produce more changes than expected, consider splitting PRs into smaller feature-scoped units, or add a system prompt that explicitly limits the scope of changes, for example instructing the agent to 'log out-of-scope changes as separate issues without fixing them', to reduce the human review burden.
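The why/where/how handoff described above can be applied manually with a small helper. The section layout here is a plausible sketch of such a note, not loop's actual format or any official template:

```python
def handoff_note(why, where, how):
    """Format a why/where/how summary for the reviewing agent.

    The Markdown section layout is an illustrative assumption,
    not the format used by loop or any official tool.
    """
    return f"## Why\n{why}\n\n## Where\n{where}\n\n## How\n{how}\n"


note = handoff_note(
    why="Empty config files crashed the loader on startup.",
    where="parse_config() in config.py",
    how="Return a default Config object when the file is empty.",
)
print(note)
```

Pasting a note like this into the reviewer's context gives the second agent the original intent to review against, which the comments summarized under Evidence suggest is what improves review quality.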
Terminology
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to seven parallel sub-agents each review a PR from a different perspective and even apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, but the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually responds to pings: a fun case that pushes the 'Markdown is code and the LLM is the processor' idea all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by an AI coding agent (such as Claude Code) and supports blame down to which prompt wrote which line of code.
Principles for agent-native CLIs
A post laying out principles for designing CLI tools that AI agents can use well. As agents rely on CLIs as tools more and more often, this style of design is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that orchestrates multiple AI agents collaborating in divided roles, letting you assemble a multi-agent pipeline quickly with zero configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that provides an isolated sandbox where an AI agent can touch real production data and still be rolled back, unifying GitHub/S3/Google Drive into a single versioned filesystem.