Show HN: Kontext CLI – Credential broker for AI coding agents in Go
TL;DR Highlight
This open-source CLI tool securely injects short-lived tokens into AI coding agents when they access external services like GitHub, Stripe, and databases, instead of exposing long-term API keys. It's gaining attention as a replacement for the risky practice of copy-pasting keys into .env files.
Who Should Read
Developers or team leads who have deployed AI coding agents like Claude or Copilot but are concerned about API key management. Specifically, backend/DevOps engineers who need to grant agents access to external services and want to reduce security risks.
Core Mechanics
- AI coding agents need access to external services like GitHub, Stripe, and databases, but most teams currently copy and paste long-term API keys into .env files. This approach carries a high risk of key leakage and lacks governance.
- Kontext CLI solves this problem by acting as a 'Credential Broker'. Agents don't directly hold keys; instead, they receive short-lived tokens at session start, which automatically expire at the end of the session.
- You declare the credentials a project needs in a `.env.kontext` file; when you run `kontext start`, the CLI exchanges them for short-lived tokens via the RFC 8693 Token Exchange standard, replaces the placeholders with those tokens, and then launches the agent.
- Every Tool Call made by the agent is streamed to the Kontext dashboard and recorded for audit and governance purposes. You can track which agent accessed which service and when.
- For static API keys (services that don't support OIDC), the backend injects the keys directly into the agent's runtime environment. The author acknowledged, however, that these keys remain readable from inside the agent's environment.
- Installation is simple with `brew install kontext-security/tap/kontext`, and direct binary installation is also supported from GitHub Releases. It's an open-source project written in Go.
- The OSS CLI appears to function as a client for a paid backend service. There have been many questions from the community about the credential storage model and the boundary between OSS and paid features, but an official answer remains unclear.
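The RFC 8693 exchange step described above can be sketched in Go, the language the tool is written in. This is a minimal illustration of a token-exchange request body only: the grant-type and token-type URNs come from the RFC itself, while the audience value and the endpoint the form would be POSTed to are hypothetical stand-ins, not Kontext's documented API.

```go
package main

import (
	"fmt"
	"net/url"
)

// exchangeForm builds an RFC 8693 token-exchange request body.
// The URN constants are defined by the RFC; the audience string is a
// hypothetical example, not something Kontext necessarily uses.
func exchangeForm(subjectToken, audience string) url.Values {
	return url.Values{
		"grant_type":           {"urn:ietf:params:oauth:grant-type:token-exchange"},
		"subject_token":        {subjectToken},
		"subject_token_type":   {"urn:ietf:params:oauth:token-type:access_token"},
		"audience":             {audience},
		"requested_token_type": {"urn:ietf:params:oauth:token-type:access_token"},
	}
}

func main() {
	// A broker would POST this form to its token endpoint and hand the
	// resulting short-lived token to the agent session.
	form := exchangeForm("broker-session-token", "api.github.com")
	fmt.Println(form.Encode())
}
```

The point of the exchange is that the agent never sees the long-lived subject token; it only receives the short-lived token minted for a specific audience.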
Evidence
- Questions were raised about whether the OSS CLI is a client of a paid service and whether credentials are stored on the Kontext server. Commenters criticized the lack of transparency in the custody model (where the user's keys are actually stored) and argued that a security tool needs a clearly stated trust model.
- In response to the explanation that static API keys are directly injected into the agent's runtime environment by the backend, a counterargument was made: 'Then what prevents the agent from reading and storing or leaking the key from the environment variable?' A deeper security concern was also raised that if the agent process runs with the same user privileges, the key could be extracted from the kontext process memory.
- Comments compared it to projects with similar approaches, such as Tailscale Aperture and OneCLI. Questions were asked about the differentiators, indicating that competition in this area has begun.
- An alternative idea using eBPF was suggested. By monitoring network I/O and intercepting requests, tokens and signatures could be injected at runtime, allowing agents to only be given placeholder tokens and eliminating the need for additional abstraction layers.
- The context-based authorization feature, issuing credentials only when the intent expressed in the agent's reasoning trace matches what the user has authorized, drew attention and was seen as a key differentiator from existing solutions.
How to Apply
- If you need to give an AI coding agent like Claude or Cursor access to the GitHub API or Stripe keys, declare the necessary credentials in a `.env.kontext` file and run the agent with `kontext start` to operate with session-limited tokens without exposing long-term keys.
- If you're using AI agents as a team and need audit logs of who accessed which services, leverage Kontext's dashboard streaming feature to record all Tool Calls and establish a governance system.
- If you need to allow agent access to legacy services (internal APIs, older SaaS, etc.) that don't support OIDC, use Kontext's static key injection method, but additionally review sandboxing to prevent the agent from directly reading environment variables.
- If you're interested in an eBPF-based approach, refer to the riptides.io case mentioned in the comments to explore the option of replacing tokens at the network layer without modifying the agent code.
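To make the first bullet concrete, a `.env.kontext` file might look like the sketch below. The placeholder syntax shown here is invented for illustration; the real format is whatever the project's README specifies.

```
# .env.kontext -- hypothetical placeholder syntax, for illustration only
GITHUB_TOKEN={{kontext:github}}
STRIPE_API_KEY={{kontext:stripe}}
DATABASE_URL={{kontext:postgres}}
```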
Code Example
# Installation
brew install kontext-security/tap/kontext
# Or direct binary installation (macOS ARM)
tmpdir="$(mktemp -d)" \
&& gh release download --repo kontext-security/kontext-cli \
--pattern 'kontext_*_darwin_arm64.tar.gz' \
--dir "$tmpdir"
tar -xzf "$tmpdir"/kontext_*_darwin_arm64.tar.gz -C "$tmpdir"  # extract, then move the binary onto your PATH
# Declare the necessary credentials in a .env.kontext file and run the agent
kontext start
Related Papers
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
An open-source plugin for Claude Code that runs up to seven parallel sub-agents, each reviewing a PR from a different perspective, and even applies automatic fixes. It claims to catch more real bugs than the built-in /review or CodeRabbit, but the community voiced skepticism about its complexity and practical value.
How Fast Does Claude, Acting as a User Space IP Stack, Respond to Pings?
An experiment that had Claude Code parse raw IP packets and construct ICMP echo replies so that it actually responds to pings, an entertaining case that pushes the idea of "Markdown is the code and the LLM is the processor" all the way down to the network-stack level.
Show HN: Git for AI Agents
A version-control tool that automatically tracks every tool call made by AI coding agents (such as Claude Code) and even supports blame, showing which prompt wrote which line of code.
Principles for agent-native CLIs
An article laying out principles for designing CLI tools that AI agents can use well; as agents lean on CLIs as tools more and more often, this design approach is becoming practically important.
Agent-harness-kit scaffolding for multi-agent workflows (MCP, provider-agnostic)
A scaffolding tool that coordinates multiple AI agents collaborating in divided roles, letting you assemble a multi-agent pipeline quickly with zero configuration, much like Vite.
Show HN: Tilde.run – Agent sandbox with a transactional, versioned filesystem
A tool providing an isolated sandbox in which an AI agent can touch real production data and still be rolled back, unifying GitHub, S3, and Google Drive into a single version-controlled filesystem.