Kuri – a Zig-based agent-browser alternative
TL;DR Highlight
Kuri, a 464KB browser automation tool written in Zig, eliminates the Node.js dependency of tools like Playwright and cuts token costs in AI agent loops with compact page snapshots.
Who Should Read
Developers integrating web browser automation into LLM agents who are frustrated by the weight or token waste of Node.js-based tools like Playwright/Puppeteer.
Core Mechanics
- Kuri is a browser automation tool written in Zig with no Node.js dependency. The binary is only 464KB, and cold starts take roughly 3ms.
- It speaks the Chrome DevTools Protocol (CDP) directly, controlling Chrome without a separate runtime. Chrome itself, however, must still be running somewhere.
- Its core design philosophy is 'for agent loops, not QA engineers.' It's optimized for a cycle of reading page state → minimizing tokens → reliably clicking with stable references → and moving to the next step.
- Compared to Vercel's agent-browser, Kuri used 16% fewer tokens for the entire agent workflow (go→snap→click→snap→eval) on a Google Flights (SIN→TPE) route. (kuri: 4,110 tokens vs agent-browser: 4,880 tokens).
- Several snapshot modes are available, with the `--interactive` mode being the most efficient for agent loops, using only 1,927 tokens compared to the compact mode's 4,328 tokens.
- A full JSON snapshot (`--json`), however, uses 31,280 tokens, 7.2 times more than compact mode, and lightpanda's semantic_tree is similarly expensive at 26,244 tokens. lightpanda has the further drawback of sometimes producing an empty DOM because it does not execute JavaScript.
- It includes built-in features like A11y (accessibility tree) snapshots, HAR (HTTP Archive) recording, a standalone fetcher, an interactive terminal browser, and security testing.
- Benchmarks were measured using the same Chrome session and the same tiktoken cl100k_base tokenizer, and can be directly reproduced using `./bench/token_benchmark.sh`.
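As a quick sanity check, the headline ratios in the bullets above follow directly from the published token counts (all numbers taken from the README benchmark):

```shell
# Recompute the headline ratios from the published token counts.
awk 'BEGIN {
  printf "kuri vs agent-browser: %.0f%% fewer tokens\n", (4880 - 4110) / 4880 * 100
  printf "--json vs compact:     %.1fx more tokens\n",   31280 / 4328
  printf "--interactive/compact: %.2fx\n",               1927 / 4328
}'
```

The 16% figure is rounded from 15.8%, and `--interactive` comes out to about 0.45x the compact mode's token count.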
Evidence
- Reports indicate that the installation script (install.sh) and installation via bun return 404 errors, suggesting the project's infrastructure is not yet fully stable.
- A commenter pointed out that the README benchmark is self-published, making the 16% token-reduction claim hard to trust without independent verification.
- A user noted that although kuri-fetch advertises itself as "standalone," it still requires Chrome to be running somewhere; it is a wrapper around CDP, not truly standalone.
- A user who had been fetching pages with brow.sh (a text-based browser) found Kuri more interesting, but was somewhat disappointed after confirming its Chrome dependency.
How to Apply
- If you're implementing tasks where an LLM agent needs to read web pages (price comparison, information gathering, etc.), try Kuri's `snap --interactive` mode instead of Playwright. You can read the same page with fewer than half the tokens, reducing API costs.
- The token savings compound in multi-step agent loops (page navigation → snapshot → click → snapshot → judgment). Kuri's benefits increase with the number of steps, making it particularly suitable for automating complex web tasks with 10 or more steps.
- If you've abandoned serverless or lightweight container deployments of Node.js-based browser automation due to binary size issues, consider Kuri's 464KB single binary. The 3ms cold start is practical even in environments like Lambda.
- For automated security testing, leverage Kuri's built-in security testing features and HAR recording. HAR files allow you to debug the agent's HTTP requests later.
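A multi-step loop of this kind might look like the sketch below. The subcommand names come from the benchmark's go→snap→click→snap→eval workflow; the element reference (`@e12`), the HAR file name, and the exact flags are placeholders, so consult the tool's own help output for the real syntax:

```shell
# Hypothetical agent loop: navigate, snapshot, act, snapshot, extract.
kuri go "https://www.google.com/travel/flights"
kuri snap --interactive        # compact snapshot the LLM reads (~2k tokens)
kuri click @e12                # @e12: placeholder stable element reference
kuri snap --interactive        # re-read page state after the click
kuri eval "document.title"     # extract a value for the agent's next step

# Inspect a HAR recording afterwards (file name assumed; requires jq).
jq -r '.log.entries[] | "\(.response.status) \(.request.url)"' run.har
```

Each snapshot the LLM reads is where the token savings accrue, so the per-step advantage multiplies with loop length.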
Code Example
# Recreate token benchmark directly
./bench/token_benchmark.sh
# Basic snapshot (compact mode, 4,328 tokens)
kuri snap
# Optimal mode for agent loops (1,927 tokens — 0.4x compact)
kuri snap --interactive
# Full JSON dump (31,280 tokens — for debugging)
kuri snap --json
# Standalone page fetcher (but Chrome must be running)
kuri-fetch https://example.com
Terminology
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code in which up to 7 parallel sub-agents each review a PR from a different perspective and even apply automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, but the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that has Claude Code parse raw IP packets and construct ICMP echo replies so that it actually responds to pings; an amusing case that pushes the idea that "Markdown is the code and the LLM is the processor" all the way down to the network stack.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by an AI coding agent (such as Claude Code) and even supports blame, showing which prompt wrote which line of code.
Principles for agent-native CLIs
An article laying out principles for designing CLI tools that AI agents can use well; as agents invoke CLIs more and more often, this design approach is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that orchestrates multiple AI agents collaborating in separate roles, letting you assemble a multi-agent pipeline quickly and with zero configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool that provides an isolated sandbox where an AI agent can touch real production data and still roll back, unifying GitHub/S3/Google Drive into a single version-controlled filesystem.