Claude Code daily benchmarks for degradation tracking
TL;DR Highlight
Marginlab runs automated daily SWE-Bench-Pro benchmarks on Claude Code (Opus 4.6) and uses statistical methods to detect meaningful performance regressions.
Who Should Read
ML engineers running production AI systems who need to track model performance over time, and researchers studying LLM evaluation methodology.
Core Mechanics
- Marginlab built an automated daily benchmarking pipeline that runs Claude Code (Opus 4.6) against SWE-Bench-Pro, producing statistically rigorous performance tracking.
- The key contribution is the statistical methodology: they use proper significance testing rather than just comparing raw scores, because SWE-bench scores have high run-to-run variance.
- Findings showed that Claude Code's performance has measurable fluctuations over time — not always in the direction of improvement.
- This kind of continuous benchmarking is valuable because model updates (including undisclosed changes) can silently affect production workloads.
- SWE-Bench-Pro is a harder variant of SWE-bench with less benchmark contamination risk — better for detecting genuine capability changes.
- The pipeline is open-sourced, enabling others to run similar continuous benchmarks against their own models or tools.
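The confidence-interval idea above can be sketched concretely. The following is a minimal, hypothetical example (the task counts are invented, not Marginlab's data) showing why a single day's pass rate needs an interval around it: with 500 tasks, even a "stable" model has a CI several points wide.

```python
import math

def wilson_interval(passes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a benchmark pass rate.

    More reliable than the naive normal approximation when the
    pass rate is far from 0.5 or the task count is small.
    """
    p = passes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return center - margin, center + margin

# Hypothetical daily run: 230 of 500 tasks resolved (46.0%)
low, high = wilson_interval(230, 500)
print(f"pass rate 46.0%, 95% CI [{low:.1%}, {high:.1%}]")
```

Note that the interval spans roughly ±4 points here, which is exactly why comparing two raw daily scores is misleading without a significance test.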
Evidence
- The statistical methodology is clearly described — they use confidence intervals and multiple test runs rather than single-point measurements, making the results more credible.
- HN discussion raised the issue that benchmark scores fluctuate due to temperature/sampling as much as actual model changes — Marginlab's approach accounts for this.
- Several ML engineers noted this is exactly the kind of infrastructure they wish existed for their own production model monitoring.
- Debate about whether SWE-bench performance correlates well with real-world coding agent performance — acknowledged as imperfect but the best available standardized metric.
How to Apply
- If you run AI coding agents in production, set up a continuous benchmark against a representative task suite — model performance can change between API updates.
- Use statistical significance testing when comparing model performance: a score difference of 2-3% may be noise; Marginlab's methodology shows how to tell the difference.
- Consider subscribing to or replicating Marginlab's pipeline for model regression alerts — especially useful before/after a model version update.
- Track your own internal task distribution against public benchmarks to understand how well SWE-bench scores predict your specific workload performance.
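To make the "2-3% may be noise" point concrete, here is a minimal sketch of a pooled two-proportion z-test between two benchmark runs. The numbers are hypothetical, not Marginlab's, and this is a generic textbook test rather than their exact methodology:

```python
import math

def two_proportion_z(pass_a: int, n_a: int, pass_b: int, n_b: int) -> float:
    """Pooled z statistic for comparing two benchmark pass rates."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: yesterday 230/500 resolved, today 215/500 -- a 3-point drop
z = two_proportion_z(230, 500, 215, 500)
significant = abs(z) > 1.96  # two-sided test at the 5% level
print(f"z = {z:.2f}, significant: {significant}")
```

With 500 tasks per run, a 3-point drop yields a z statistic well under 1.96, so it cannot be distinguished from sampling noise; a real regression alert needs either a larger gap or more runs.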
Code Example
# Update Claude Code CLI to the latest version
claude update
# Check the currently installed version
claude --version
Terminology
SWE-Bench-Pro: A harder variant of SWE-bench with reduced benchmark contamination risk — better for detecting genuine model capability changes.
Continuous benchmarking: Automatically running performance evaluations on a schedule (e.g., daily) to detect regressions or improvements over time.
Statistical significance: A measure of whether an observed difference in results is likely to reflect a real change vs. random variation — essential for benchmarks with high run-to-run variance.