CrabTrap: An LLM-as-a-judge HTTP proxy to secure agents in production
TL;DR Highlight
Brex’s CrabTrap intercepts all HTTP requests from AI agents, using an LLM judge to allow or deny access based on policy, sparking debate over the fundamental limits of LLM-based security layers.
Who Should Read
Backend/infrastructure developers operating AI agents in production and seeking to control unauthorized API calls or sensitive data exfiltration.
Core Mechanics
- CrabTrap functions as an HTTP proxy positioned between AI agents and the open internet, intercepting all outgoing requests, evaluating them against defined policies, and either allowing or blocking them in real-time.
- It combines two judgment methods: fast 'static rules' for initial filtering, and an LLM called as an additional judge for ambiguous requests that rules alone cannot resolve, logging each decision’s method.
- Brex claims automatically generated policies, tested on days of real traffic, aligned with human judgment in ‘the vast majority’ of cases, but the community argues ‘99% safe’ is a failing grade for security.
- To prevent prompt injection attacks, policy content is serialized as JSON (using json.Marshal) and embedded in the prompt, escaping special characters and command-like text.
- Brex started from the premise that agent security is currently stuck in a binary ‘all or nothing’ paradigm, attempting to balance the trade-off between powerful but risky access and restrictive but useless lockdown.
- Installation requires installing a self-signed certificate system-wide to perform HTTPS traffic MITM (man-in-the-middle) interception, a process some commenters found inconvenient.
- The project is open-source and available on GitHub, with Brex advertising a ‘30-second setup’.
Evidence
- "The probabilistic nature of the LLM-as-a-judge approach was the biggest point of contention. One commenter questioned the risk of basing a security system on probability rather than hard limits, with others agreeing that ‘deterministic ACLs are needed’ or it’s ‘just a non-deterministic business rules engine.’\n\nThe potential for shared vulnerabilities when the agent and judge use the same model family was raised. For example, if both use Claude, a prompt injection pattern that fools the agent could also fool the judge, leading to calls for ‘defense in depth’ using at least different providers, ideally different architectures.\n\nConcerns were raised that because the judge only sees the HTTP body, attackers manipulating agent inputs can also manipulate the judge’s context window, representing a fundamental failure mode where the judge is ‘deprived of the signals needed to detect the trick.’\n\nSome argued CrabTrap can only be a detection layer, not a prevention layer, reasoning that ‘credentials are already read when the LLM makes an external POST request,’ making kernel-level control suitable for auditing what an agent did, not preventing it.\n\nA commenter introduced EvalView as an alternative approach, using full execution trajectory snapshots and diffs to track changes, with a lightweight zero-judge model check to determine drift level (NONE/WEAK/MEDIUM/STRONG), criticizing the idea of solving LLM security problems by adding more LLM layers."
How to Apply
- "If you’re running AI agents in production that automatically call external services like Slack, GitHub, or internal APIs, deploy CrabTrap as a proxy between the agent and the internet, defining immediate guardrails like ‘block external calls to specific domains’ or ‘block large data transfers.’\n\nIf you recognize the probabilistic limitations of the LLM-as-judge, position CrabTrap as an audit layer rather than a sole defense. Handle actual blocking with network policies or IAM permissions, and use CrabTrap for visibility, logging what the agent attempted.\n\nWhen selecting a judge model, consider using an LLM from a different provider or with a different architecture than the agent’s model. Using the same model family reduces the effectiveness of defense in depth.\n\nIf the MITM structure with system-wide self-signed certificate installation is concerning, minimize the impact by running the agent in an isolated container or sandbox environment and deploying CrabTrap as the gateway for that environment."
Code Example
// CrabTrap’s prompt injection prevention method (Go code, from GitHub source)
// The policy is embedded as a JSON-escaped value inside a structured JSON object.
// This prevents prompt injection via policy content — any special characters,
// delimiters, or instruction-like text in the policy are safely escaped by
// json.Marshal rather than concatenated as raw text.
policyJSON, err := json.Marshal(policyContent)
// policyJSON is now a safely escaped string that can be inserted into the promptTerminology
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
Claude Code에서 최대 7개의 병렬 서브 에이전트가 각각 다른 관점으로 PR을 리뷰하고, 자동 수정까지 해주는 오픈소스 플러그인이다. 기존 /review나 CodeRabbit보다 실제 버그를 더 많이 잡는다고 주장하지만 커뮤니티에서는 복잡도와 실효성에 대한 회의론도 나왔다.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
Claude Code에게 IP 패킷을 직접 파싱하고 ICMP echo reply를 구성하도록 시켜서 실제로 ping에 응답하게 만든 실험으로, 'Markdown이 곧 코드이고 LLM이 프로세서'라는 아이디어를 네트워크 스택 수준까지 밀어붙인 재미있는 사례다.
Show HN: Git for AI Agents
AI 코딩 에이전트(Claude Code 등)가 수행한 모든 툴 호출을 자동으로 추적하고, 어떤 프롬프트가 어느 코드 줄을 작성했는지 blame까지 가능한 버전 관리 도구다.
Principles for agent-native CLIs
AI 에이전트가 CLI 도구를 더 잘 사용할 수 있도록 설계하는 원칙들을 정리한 글로, 에이전트가 CLI를 도구로 활용하는 빈도가 높아지면서 이 설계 방식이 실용적으로 중요해지고 있다.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
여러 AI 에이전트가 서로 역할을 나눠 협업할 수 있도록 조율하는 scaffolding 도구로, Vite처럼 설정 없이 빠르게 멀티 에이전트 파이프라인을 구성할 수 있다.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
AI 에이전트가 실제 프로덕션 데이터를 건드려도 롤백할 수 있는 격리된 샌드박스 환경을 제공하는 도구로, GitHub/S3/Google Drive를 하나의 버전 관리 파일시스템으로 묶어준다.