MCP server that reduces Claude Code context consumption by 98%
TL;DR Highlight
When MCP tool calls return raw, verbose output, they eat the context window fast — here's a pattern for compressing tool outputs before they reach the LLM.
Who Should Read
Developers building MCP servers or AI agent pipelines where tool call outputs are large and verbosity is killing context window efficiency.
Core Mechanics
- The core problem: MCP tools often return raw, verbose output (full API responses, stack traces, large data blobs) that gets dumped directly into the LLM's context window.
- This wastes context tokens on information the LLM doesn't need, increases latency, and raises costs.
- The solution pattern: add a 'summarizer' or 'filter' layer between the tool execution and the context — transform raw outputs into compact, LLM-relevant summaries before they're inserted.
- For structured data (JSON API responses), extract only the fields the agent actually needs. For errors, compress to key error type + message. For large text, summarize.
- This can be implemented as a middleware layer in the MCP server, or as a post-processing step in the agent loop.
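The filter layer described above can be sketched in a few lines. This is a minimal, language-agnostic illustration of the pattern (the function name and parameters are hypothetical, not part of any MCP SDK): structured JSON gets reduced to the fields the agent needs, and anything else falls through to a size cap.

```python
import json

def compress_tool_output(raw: str, keep_fields=None, max_chars=2000) -> str:
    """Compress a raw tool result before it enters the model context."""
    # Structured output: keep only the fields the agent actually needs.
    if keep_fields:
        try:
            data = json.loads(raw)
            if isinstance(data, dict):
                slim = {k: data[k] for k in keep_fields if k in data}
                return json.dumps(slim)
        except json.JSONDecodeError:
            pass  # not JSON; fall through to the plain-text path
    # Large text: truncate and mark the omission rather than passing it all through.
    if len(raw) > max_chars:
        omitted = len(raw) - max_chars
        return raw[:max_chars] + f"\n[truncated: {omitted} chars omitted]"
    return raw
```

In an MCP server this would sit between the tool handler and the response it returns; in an agent loop, it runs on each tool result before appending it to the conversation.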
Evidence
- The author demonstrated context window savings of 60–80% on common tool outputs (API responses, file reads, terminal output) by applying output compression.
- HN commenters shared similar patterns they'd developed — some using a small/fast LLM as the summarizer to avoid adding latency.
- One commenter noted this is essentially the 'perception-action loop' problem in cognitive architectures: you need selective attention, not full sensory input.
How to Apply
- When building MCP servers, define an output schema for each tool that specifies what fields are relevant for the agent — filter raw outputs to only those fields.
- For tools that return large text (documentation, file contents), add a max_tokens parameter and truncate with a summary suffix rather than passing the full text.
- Consider using a fast/cheap LLM (like GPT-4o-mini or Haiku) as a summarizer for tool outputs before they reach the main agent model — the cost savings on the main model usually outweigh the summarizer cost.
- Log both the raw tool output and the compressed version during development to tune your compression strategies.
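Two of the steps above — compressing errors to type + message, and logging both raw and compressed versions for tuning — can be combined in a small sketch. The helper names here are illustrative assumptions, and the traceback parsing assumes Python-style tracebacks where the last line is `ExceptionType: message`:

```python
import logging

logger = logging.getLogger("tool-compression")

def compress_error(raw_traceback: str) -> str:
    # A Python-style traceback ends with "ExceptionType: message";
    # keep just that final line and drop the stack frames.
    lines = [l for l in raw_traceback.strip().splitlines() if l.strip()]
    return lines[-1] if lines else raw_traceback

def log_compression(tool_name: str, raw: str, compressed: str) -> str:
    # Record both versions so compression strategies can be tuned offline.
    logger.debug("%s: raw=%d chars, compressed=%d chars",
                 tool_name, len(raw), len(compressed))
    return compressed
```

During development, dropping the logger to DEBUG gives a per-tool record of how much each compression strategy actually saves.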
Code Example
# MCP-only installation (tool use only)
claude mcp add context-mode -- npx -y context-mode
# Plugin Marketplace installation (includes auto-routing hook + slash command)
/plugin marketplace add mksglu/claude-context-mode
/plugin install context-mode@claude-context-mode
Terminology
MCP: Model Context Protocol — Anthropic's open protocol for connecting AI models to external tools and data sources in a standardized way.
Context window: The maximum amount of text (measured in tokens) an LLM can process in a single call. Once exceeded, earlier content gets dropped or the call fails.
Tool call: When an LLM invokes an external function or API as part of generating a response — the output of that function gets added back to the context.