Show HN: Optio – Orchestrate AI coding agents in K8s to go from ticket to PR
TL;DR Highlight
A Kubernetes-based workflow automation tool where an AI agent writes code from GitHub Issues or Linear tickets, automatically fixes CI failures, incorporates review comments, and merges PRs — all without human intervention. It stands out for fully automating the entire ticket-to-PR cycle.
Who Should Read
DevOps or platform engineers looking to integrate AI coding agents into their team's development workflow. Particularly suited for teams running Kubernetes infrastructure who want to automate repetitive coding tasks.
Core Mechanics
- Optio accepts tasks via three methods — GitHub Issues, Linear tickets, or manual input — and for each task, provisions an isolated Kubernetes Pod and runs an AI agent (Claude Code or OpenAI Codex) inside it.
- The 'feedback loop' is the core feature, going beyond simple code generation. If CI fails, the agent is automatically resumed with the failure context; if a reviewer requests changes, the agent reads the review comments and pushes fix commits.
- Once all checks pass, the PR is squash merged and the issue is automatically closed. The goal is full automation from 'task description to merge complete' without anyone needing to click a PR button.
- Each task runs in an isolated git worktree, allowing multiple tasks to run in parallel simultaneously. A dashboard provides real-time visibility into the number of running agents, Pod status, costs, and recent activity.
- The task detail view offers live streaming of agent output, pipeline progress, PR tracking, and per-task cost analysis, providing operational visibility.
- A Helm chart is included for deployment to Kubernetes clusters, and a docker-compose.yml is also provided for running in local development environments. The project uses a pnpm + Turborepo monorepo structure with separate API, web, and agent components.
- It is open source (MIT license), and the repository includes files like CLAUDE.md and CONTRIBUTING.md that provide context for AI agents to work with. It has currently received 366 stars on GitHub.
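The per-task worktree isolation described above can be sketched with plain git commands. This is a minimal toy illustration, not Optio's actual implementation; the branch and directory names (optio/task-101, etc.) are hypothetical.

```shell
# Toy sketch of per-task isolation via git worktrees.
# Branch/directory names are illustrative, not Optio's real scheme.
git init -b main demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit"

# One worktree + branch per task: each agent gets its own checkout,
# so parallel tasks never share a working directory.
git worktree add ../task-101 -b optio/task-101
git worktree add ../task-102 -b optio/task-102

# Lists the main checkout plus one entry per running task
git worktree list
```

Because each worktree has its own working directory and index, two agents can stage and commit independently; conflicts only surface later, at merge time — which is exactly the parallel-execution concern raised in the comments below.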
Evidence
- Skepticism that "it won't work properly without human oversight" was raised in multiple comments. Some observed that AI can work in the right direction from a GitHub Issue alone only for the simplest tickets; others argued LLMs should be used more for communicating design/architecture decisions to humans than for code generation.
- Parallel execution conflicts were raised as a practical concern: what happens when agent A is working on a PR modifying shared/utils.py and agent B receives a ticket that also needs the same file? Whether the orchestrator performs dependency analysis upfront or handles it as a merge conflict was asked, but no clear answer was given.
- Real-world experience with retry token costs was shared: a developer building a similar system noted that excessive token consumption during agent retries was their biggest challenge and asked how Optio handles this. Optio uses checkpoints to roll back to previous states on failure.
- Some felt burdened by Kubernetes being a hard requirement, arguing K8s should be one option rather than central to the agent setup. "GitHub Actions + @claude mention" was mentioned as a cheaper way to achieve similar results.
- There was direct criticism asking "what stops it from spitting out garbage that breaks the codebase," with responses suggesting you should still want to review agent output.
- Practical questions followed about MCP (Model Context Protocol) support, sandboxed multi-tenant isolation, and whether Pods are scoped per repo or per task.
How to Apply
- Teams managing backlogs with Linear or GitHub Issues can start by connecting repetitive, clearly specified tickets (e.g., adding a specific API endpoint, fixing type errors, improving test coverage) to Optio and experimenting with the automation pipeline. Initially, focus on verifying that the CI auto-fix loop works correctly.
- To try it without K8s, run it locally using the docker-compose.yml included in the repo. Referencing .env.example to configure your Claude or OpenAI API key and GitHub token lets you test the full workflow in a local environment.
- For production deployment, use the Helm chart in the helm/optio directory to deploy to an existing K8s cluster. In a multi-tenant environment, however, first verify that sandbox isolation between Pods is sufficient — the community has noted this is not yet clearly documented.
- To reduce the risk of conflicts when parallel agents modify the same files, it is safer to initially run only tasks involving independent modules or files in parallel, and to establish a task assignment strategy that runs tasks touching shared utilities sequentially.
Code Example
# Run locally
git clone https://github.com/jonwiggins/optio
cd optio
cp .env.example .env
# Set ANTHROPIC_API_KEY, GITHUB_TOKEN, etc. in .env
docker-compose up
# Deploy to K8s with Helm
helm install optio ./helm/optio \
  --set env.ANTHROPIC_API_KEY=<your-key> \
  --set env.GITHUB_TOKEN=<your-token>
Terminology
worktree: A Git feature that allows multiple branches to be checked out simultaneously into different directories from a single repository. Optio creates an isolated worktree per task to prevent file conflicts during parallel execution.
Orchestration: The coordination of execution order and interactions among multiple components (agents, CI, review systems, etc.). Like a conductor leading an orchestra, Optio manages the entire workflow of the AI agents.
squash merge: A merge strategy that combines all commits from a PR into a single commit before merging. This cleans up the messy commit history an AI agent creates while making repeated fixes.
Helm chart: A package of templates that bundles a Kubernetes application for easy deployment and management. Think of it as the K8s equivalent of an apt or npm package.
MCP: Stands for Model Context Protocol — a protocol that allows AI models to access external tools and data sources in a standardized way. Proposed by Anthropic, it is used when agents need to integrate with various external systems.
CI: Stands for Continuous Integration — a pipeline that automatically runs builds and tests whenever code is pushed, enabling early detection of issues.
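The squash merge described in the glossary above can be reproduced with plain git. This is a self-contained toy example (branch names and commit messages are illustrative): three noisy "agent retry" commits collapse into a single commit on main.

```shell
# Toy demo: squash-merging a branch of repeated fix-up commits.
git init -b main sq-demo && cd sq-demo
gitc() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
gitc commit --allow-empty -m "initial commit"

# Simulate an agent pushing several retry commits on a task branch
git checkout -b optio/fix-ci
for i in 1 2 3; do
  echo "attempt $i" >> fix.txt
  git add fix.txt
  gitc commit -m "agent retry $i"
done

# Squash merge: stages the combined diff without creating a merge commit
git checkout main
git merge --squash optio/fix-ci
gitc commit -m "Fix CI (squashed)"

# History on main: the initial commit plus one clean squashed commit
git log --oneline main
```

The same effect is available on GitHub via the "Squash and merge" button (or `gh pr merge --squash`), which is presumably what Optio drives once all checks pass.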