Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models
TL;DR Highlight
Don't write prompts once and throw them away — turn them into 'living playbooks' that automatically improve as experience accumulates.
Who Should Read
Backend/AI developers running LLM agents in production and wondering 'why does it keep making the same mistakes?' Especially teams manually improving system prompts or trying to inject domain knowledge without RAG.
Core Mechanics
- Existing prompt optimization tools suffer from a brevity bias that converges toward 'short and generic instructions,' losing domain-specific know-how
- When an LLM rewrites accumulated context wholesale, an 18,282-token context shrinks to 122 tokens — context collapse that drops performance below baseline
- ACE separates roles into Generator (execute) → Reflector (reflect) → Curator (organize), adding only learned 'deltas' instead of rewriting everything
- Manages context at bullet-point granularity with 'helped/hurt' counters per bullet → good strategies strengthen and bad ones get pruned over time
- Self-improvement possible without labels using only environment feedback like code execution success/failure — no supervised data needed
- With ACE, DeepSeek-V3.1 (a smaller open-source model) matches IBM CUGA (the top-ranked production agent, built on GPT-4.1) on the AppWorld leaderboard average, and outperforms it on the harder test-challenge split
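The bullet-level bookkeeping above can be sketched as a small data structure. This is a minimal illustration, not the paper's implementation; the names `Bullet`, `Playbook`, and the `min_score` pruning threshold are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Bullet:
    """One unit of learned context, with per-bullet helped/hurt counters."""
    bullet_id: int
    text: str
    helped: int = 0
    hurt: int = 0

@dataclass
class Playbook:
    """Evolving context: bullets are appended as deltas, never rewritten wholesale."""
    bullets: dict = field(default_factory=dict)
    _next_id: int = 0

    def add_delta(self, text: str) -> int:
        """Curator adds only new bullets (incremental update, no full rewrite)."""
        self._next_id += 1
        self.bullets[self._next_id] = Bullet(self._next_id, text)
        return self._next_id

    def record_feedback(self, bullet_id: int, helped: bool) -> None:
        """After each run, credit or blame the bullets that were used."""
        b = self.bullets[bullet_id]
        if helped:
            b.helped += 1
        else:
            b.hurt += 1

    def prune(self, min_score: int = -2) -> None:
        """Drop bullets whose hurt count dominates over time (assumed threshold)."""
        self.bullets = {i: b for i, b in self.bullets.items()
                        if b.helped - b.hurt >= min_score}

    def render(self) -> str:
        """Serialize the playbook back into a system-prompt block."""
        return "\n".join(f"[{b.bullet_id}] {b.text}" for b in self.bullets.values())
```

Because updates are append-plus-prune rather than a wholesale rewrite, one bad update cannot erase the accumulated context — which is exactly the collapse mode described above.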
Evidence
- Average +17.1% accuracy improvement over baseline on AppWorld benchmark (online adaptation, no labels)
- 82.3% reduction in offline adaptation latency and 75.1% fewer rollouts vs GEPA
- 91.5% reduction in online adaptation latency and 83.6% token cost reduction vs Dynamic Cheatsheet
- Financial analysis Formula benchmark: baseline 67.5% → ACE 85.5% (+18.0%p, offline with labels)
How to Apply
- Manage system prompts as a list of items with bullet IDs rather than a single text block. After each run, pass failure cases to a Reflector (separate LLM call) to extract 'what went wrong + what to do next,' and have the Curator add only new bullets
- If your agent's environment provides execution feedback (API call failures, assertion errors), you can apply this without labels — pipe code execution results directly as Reflector input
- When context grows too large, add lazy refinement that de-duplicates similar bullets via semantic embedding to prevent context window overflow
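The lazy de-duplication step can be illustrated with cosine similarity over embeddings. In this sketch a toy bag-of-words vector stands in for a real sentence-embedding model, and the 0.8 threshold is an arbitrary assumption:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in: bag-of-words token counts. A real system would call a
    # sentence-embedding model here instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def dedup_bullets(bullets: list, threshold: float = 0.8) -> list:
    """Keep the first of each near-duplicate pair (lazy refinement)."""
    kept = []
    for text in bullets:
        if all(cosine(embed(text), embed(k)) < threshold for k in kept):
            kept.append(text)
    return kept
```

Running this only when the playbook approaches the context-window budget keeps the common path cheap: most steps are append-only, and the O(n²) comparison happens rarely.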
Code Example
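A minimal end-to-end sketch of the Generator → Reflector → Curator loop. The `llm` function is a placeholder returning canned text (not a real API client), and the success check is a stand-in for genuine execution feedback, so this shows the control flow only:

```python
def llm(prompt: str) -> str:
    # Placeholder for a real LLM call. Returns canned text here so the
    # control flow is runnable end to end.
    return "LESSON: check auth token expiry before issuing API calls"

def run_task(task: str, playbook: list) -> tuple:
    """Generator: execute the task with the current playbook as context."""
    context = "\n".join(f"- {b}" for b in playbook)
    output = llm(f"Playbook:\n{context}\n\nTask: {task}")
    succeeded = "error" not in output.lower()  # stand-in for execution feedback
    return output, succeeded

def reflect(task: str, output: str) -> str:
    """Reflector: turn a trajectory (especially a failure) into a lesson."""
    return llm(f"Task: {task}\nOutput: {output}\n"
               "What went wrong, and what should be done next time?")

def curate(playbook: list, lesson: str) -> list:
    """Curator: merge the delta as a new bullet; never rewrite the whole context."""
    if lesson not in playbook:
        playbook = playbook + [lesson]
    return playbook

def ace_step(task: str, playbook: list) -> list:
    """One adaptation step: run, and on failure, learn from the trajectory."""
    output, ok = run_task(task, playbook)
    if not ok:  # adapts from environment feedback alone; no labels needed
        playbook = curate(playbook, reflect(task, output))
    return playbook
```

Calling `ace_step` in a loop over tasks gives online adaptation; running it over a fixed training set before deployment and freezing the resulting playbook gives the offline (system prompt) variant.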
Original Abstract
Large language model (LLM) applications such as agents and domain-specific reasoning increasingly rely on context adaptation -- modifying inputs with instructions, strategies, or evidence, rather than weight updates. Prior approaches improve usability but often suffer from brevity bias, which drops domain insights for concise summaries, and from context collapse, where iterative rewriting erodes details over time. Building on the adaptive memory introduced by Dynamic Cheatsheet, we introduce ACE (Agentic Context Engineering), a framework that treats contexts as evolving playbooks that accumulate, refine, and organize strategies through a modular process of generation, reflection, and curation. ACE prevents collapse with structured, incremental updates that preserve detailed knowledge and scale with long-context models. Across agent and domain-specific benchmarks, ACE optimizes contexts both offline (e.g., system prompts) and online (e.g., agent memory), consistently outperforming strong baselines: +10.6% on agents and +8.6% on finance, while significantly reducing adaptation latency and rollout cost. Notably, ACE could adapt effectively without labeled supervision and instead by leveraging natural execution feedback. On the AppWorld leaderboard, ACE matches the top-ranked production-level agent on the overall average and surpasses it on the harder test-challenge split, despite using a smaller open-source model. These results show that comprehensive, evolving contexts enable scalable, efficient, and self-improving LLM systems with low overhead.