Improving 15 LLMs at Coding in One Afternoon: Only the Harness Changed
TL;DR Highlight
The performance bottleneck in LLM coding agents isn't the model — it's the edit tool harness. Changing just the edit format can swing performance by double digits.
Who Should Read
Engineers building or optimizing AI coding agents, and ML researchers studying the impact of tool interface design on agent performance.
Core Mechanics
- Across multiple coding agent frameworks, the model accounts for only part of the performance variance on benchmarks like SWE-bench — the edit tool format is a surprisingly large factor.
- Different edit formats (unified diff, whole-file replacement, search-and-replace, line-number based) produce dramatically different agent performance on the same model.
- The best-performing edit format varies by model — what works best for GPT-4o may not be optimal for Claude Sonnet, and vice versa.
- The underlying reason: models were trained with different distributions of edit operations in their training data, making them naturally better at some formats than others.
- The practical implication: if you're optimizing a coding agent, changing the edit format is one of the highest-leverage interventions available before retraining.
- This also explains some of the performance differences between coding agent frameworks — they use different default edit tools, and the format choice dominates.
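The format differences above are easiest to see on a concrete edit. Below is a minimal sketch, with illustrative file contents and names (nothing here comes from a specific framework), of the same one-line fix expressed in three of the formats, plus a small applier for the search-and-replace style:

```python
# Hypothetical example: one bug fix expressed three ways.
ORIGINAL = "def add(a, b):\n    return a - b\n"

# 1. Whole-file replacement: the model re-emits the entire file.
WHOLE_FILE_EDIT = "def add(a, b):\n    return a + b\n"

# 2. Unified diff: additions/deletions relative to the original.
UNIFIED_DIFF_EDIT = """\
--- a/math_utils.py
+++ b/math_utils.py
@@ -1,2 +1,2 @@
 def add(a, b):
-    return a - b
+    return a + b
"""

# 3. Search-and-replace: exact text to find, and its replacement.
SEARCH_REPLACE_EDIT = {
    "search": "    return a - b\n",
    "replace": "    return a + b\n",
}

def apply_search_replace(source: str, edit: dict) -> str:
    """Apply a search-and-replace edit, failing loudly when the search
    text is missing or ambiguous (a common LLM failure mode)."""
    count = source.count(edit["search"])
    if count != 1:
        raise ValueError(f"search text matched {count} times, expected 1")
    return source.replace(edit["search"], edit["replace"])

print(apply_search_replace(ORIGINAL, SEARCH_REPLACE_EDIT))
```

The tradeoffs follow directly from the shapes: whole-file is trivial to apply but forces the model to reproduce unchanged code verbatim; unified diff is compact but strict about line context; search-and-replace is forgiving on position but brittle when the search text is not unique.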
Evidence
- The analysis includes controlled experiments where only the edit format was changed and model/prompt/task were held constant — isolating the format as the variable.
- Performance swings of 10-20 percentage points on SWE-bench Verified were observed between edit format choices on the same model.
- The HN reaction was one of surprise — most engineers had assumed benchmark differences between frameworks reflected prompt quality, not tool interface design.
- Follow-up experiments by community members confirmed the findings held across multiple model families.
- The aider project (a coding agent CLI) was cited as having done early work on edit format optimization — their findings align with this analysis.
How to Apply
- If you're building a coding agent: don't assume the default edit format of your chosen framework is optimal. Benchmark at least 3-4 edit formats (unified diff, whole-file, search-replace) against your target model.
- For teams switching models: re-evaluate your edit format when you switch models — the optimal format is model-specific.
- Use this insight to prioritize optimization work: before spending weeks on prompt engineering, spend a day testing edit formats. The ROI is often higher.
- Check if your agent framework exposes edit format as a configurable parameter. If not, consider contributing the feature — it's high-value for the community.
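The benchmarking advice above can be sketched as a small ablation loop. This is an assumed shape, not a real framework API: `run_agent` is a hypothetical callable you would wire to your own agent, and only the bookkeeping is shown.

```python
# Sketch of a controlled edit-format comparison: hold model, prompt, and
# tasks constant, vary only the edit format, and record pass rates.
from collections import defaultdict

EDIT_FORMATS = ["unified-diff", "whole-file", "search-replace", "line-number"]

def benchmark(tasks, model, run_agent):
    """run_agent(task, model, edit_format) -> bool (True on task success).

    Returns a pass rate per edit format over the same task set.
    """
    results = defaultdict(list)
    for fmt in EDIT_FORMATS:
        for task in tasks:
            results[fmt].append(run_agent(task, model, fmt))
    return {fmt: sum(r) / len(r) for fmt, r in results.items()}
```

Because every run shares the same tasks and model, any spread in the returned pass rates is attributable to the edit format alone, which is the isolation the cited experiments relied on.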
Code Example
snippet
# Install tilth and apply its hash-based edit format to Claude Code
cargo install tilth  # or: npx tilth
tilth install claude-code --edit
Terminology
Edit tool harness: The software layer that translates an LLM's intent to edit a file into actual filesystem operations — the format of instructions matters enormously for LLM performance.
Unified diff: A standard format for expressing code changes as additions/deletions relative to the original file — widely used in version control but not always optimal for LLM agents.
Search-and-replace edit format: An edit instruction format where the LLM specifies the exact text to find and what to replace it with — different tradeoffs vs. diff-based formats.