Show HN: Context Gateway – Compress agent context before it hits the LLM
TL;DR Highlight
A proxy that sits between AI agents (Claude Code, Cursor, etc.) and LLM APIs to intercept, log, and modify requests in real time.
Who Should Read
Developers building or debugging AI agent systems who want observability and control over what gets sent to LLM APIs, and security engineers monitoring agent behavior.
Core Mechanics
- The proxy intercepts all API calls between agents (Claude Code, Cursor, Cline, etc.) and the underlying LLM APIs (Anthropic, OpenAI, etc.).
- It provides real-time logging of requests and responses, including the full prompt/completion, token counts, and latency.
- Beyond logging, it can modify requests on the fly — injecting additional context, filtering out sensitive data, or redirecting to a different model.
- Use cases: debugging unexpected agent behavior, monitoring for prompt injection attempts, enforcing data privacy policies, and cost attribution by agent.
- The proxy is positioned as 'insert once, observe everything' — configure your agent's API base URL to point to the proxy and all traffic flows through it.
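Because interception is configured purely through the base URL, routing an agent through the proxy can be as simple as setting an environment variable. A minimal sketch, assuming the proxy listens on localhost:8080 (the address is a placeholder; `ANTHROPIC_BASE_URL` and `OPENAI_BASE_URL` are the standard SDK overrides):

```shell
# Route agent API traffic through the proxy instead of the vendor endpoint.
# The localhost address is a placeholder for wherever the proxy actually runs.
export ANTHROPIC_BASE_URL="http://localhost:8080"
export OPENAI_BASE_URL="http://localhost:8080/v1"
echo "Agents launched from this shell now send traffic via $ANTHROPIC_BASE_URL"
```

Any agent started from this shell inherits the override, which is why no agent code changes are needed.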
Evidence
- The tool was demonstrated with real Claude Code sessions, showing the full context window content that agents send to the API.
- HN commenters found the full-context logging surprisingly revealing — agents often include much more context than users expect.
- Security-focused commenters liked the prompt injection detection angle — being able to see exactly what's being sent helps catch adversarial content in tool outputs.
- Some noted that any proxy adds latency and becomes a potential single point of failure — important to design for fallback.
How to Apply
- Point your agent's API base URL to the proxy during development/debugging — understanding exactly what gets sent to the API is essential for iterating on prompts and tools.
- Use the proxy to enforce data privacy policies: configure it to filter or redact PII before requests leave your infrastructure.
- For cost attribution: use the proxy to tag requests by agent/project and track token usage per workload.
- In production, deploy the proxy with redundancy and a fallback to direct API calls — don't let the observability layer become a reliability bottleneck.
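The fallback point above can be sketched as a health-checked launcher: prefer the proxy when it responds, otherwise fail open to the direct API. The `/health` path is an assumed convention, not a documented endpoint of this proxy:

```shell
# Hypothetical fail-open wrapper: use the proxy when healthy, else go direct,
# so the observability layer cannot take agents down with it.
PROXY_URL="http://localhost:8080"        # placeholder proxy address
DIRECT_URL="https://api.anthropic.com"   # real Anthropic endpoint
# /health is an assumed health-check route, not a documented one.
if curl -fsS --max-time 2 "$PROXY_URL/health" >/dev/null 2>&1; then
  export ANTHROPIC_BASE_URL="$PROXY_URL"
else
  export ANTHROPIC_BASE_URL="$DIRECT_URL"
fi
echo "Using base URL: $ANTHROPIC_BASE_URL"
```

The trade-off of failing open is losing observability (and any policy enforcement) during proxy outages; for privacy-critical deployments you may prefer to fail closed instead.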
Code Example
# 1. Install
curl -fsSL https://compresr.ai/api/install | sh
# 2. Run TUI wizard (select agent, configure model/API key, set threshold)
context-gateway
# 3. Check compression logs
cat logs/history_compaction.jsonl
Terminology
- Proxy: An intermediary server that intercepts traffic between clients and servers; used here to observe and modify LLM API calls.
- Prompt injection: An attack in which malicious content in the environment (tool outputs, web pages) tries to manipulate the LLM's behavior by injecting instructions.
- Base URL: The root URL for an API client; pointing the base URL at a proxy is how traffic interception is configured without modifying agent code.
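The history_compaction.jsonl log from the code example above is line-delimited JSON, so it can be inspected with ordinary shell tools. A minimal sketch using a made-up record, since the actual log schema isn't shown in the post:

```shell
# JSONL means one JSON object per line; standard Unix tools apply.
# This record is illustrative only -- the real log fields may differ.
echo '{"ts":"2024-01-01T00:00:00Z","tokens_before":12000,"tokens_after":3000}' \
  > /tmp/history_compaction.jsonl
# Pretty-print the most recent entry (python3 -m json.tool is stdlib).
tail -n 1 /tmp/history_compaction.jsonl | python3 -m json.tool
```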