Leanstral: Open-source agent for trustworthy coding and formal proof engineering
TL;DR Highlight
Mistral released Leanstral, an Apache 2.0-licensed AI agent for formal mathematical proofs in Lean 4, achieving performance comparable to or better than Claude Sonnet at roughly 1/15th the cost.
Who Should Read
Developers applying AI code generation to mission-critical software or mathematical research where verifying correctness is the bottleneck, and math/CS researchers interested in Lean 4 or formal verification.
Core Mechanics
- Human review of AI-generated code is the biggest bottleneck on engineering speed. Leanstral is an agent that addresses this by proving mathematically rigorous specifications alongside the code it generates.
- Lean 4 is a proof assistant that can express both complex mathematical structures (like perfectoid spaces) and software specifications (like Rust code properties). Leanstral is the first open-source code agent designed specifically for Lean 4.
- MoE (Mixture of Experts) architecture with 120B total parameters but only 6B activated during inference for high efficiency. Weights are fully released under Apache 2.0 license.
- Published FLTEval, a new benchmark evaluating real Fermat's Last Theorem project PRs — completing formal proofs and correctly defining new mathematical concepts, moving beyond competition math problem solving.
- Outstanding efficiency vs open-source models. GLM5-744B-A40B and Kimi-K2.5-1T-32B score only 16.6 and 20.1 on FLTEval; Leanstral surpasses both with just pass@1. Strongest OSS competitor Qwen3.5-397B-A17B achieves 25.4 at pass@4, while Leanstral hits 26.3 at pass@2.
- Dominant cost efficiency vs Claude family. Leanstral pass@2 ($36) scores 2.6 points higher than Claude Sonnet ($549), and pass@16 ($290) scores 8 points higher. Claude Opus 4.6 leads at 39.6 but costs $1,650 — 92x Leanstral's cost.
- Trained for maximum performance with lean-lsp-mcp (the Lean language server wrapped as an MCP server), and can attach arbitrary MCP servers using Mistral Vibe as the scaffold.
- Real-world case: Leanstral solved a compilation issue in Lean 4.29.0-rc6 by self-diagnosing a definitional-equality problem in which a `def` type alias blocked `rw` tactic pattern matching, and proposing a change from `def` to `abbrev`.
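The `def` vs `abbrev` behavior behind that last case can be sketched in a few lines of Lean 4 (the alias names here are illustrative, not taken from the actual Lean 4.29.0-rc6 issue):

```lean
-- A `def` alias is semireducible: tactics like `rw` match goals
-- syntactically and will not unfold it, so even elaborating `0 + n`
-- can fail, because instance search does not unfold `Meters`
-- to find `Nat`'s `Add` instance:
def Meters := Nat
-- example (n : Meters) : 0 + n = n := Nat.zero_add n  -- error

-- An `abbrev` alias is reducible: it unfolds during elaboration and
-- unification, so `Nat` lemmas and instances apply directly.
abbrev Millimeters := Nat

example (n : Millimeters) : 0 + n = n := Nat.zero_add n
```

Marking a definition `@[reducible]` has the same effect as `abbrev`, which is why switching the alias resolves this class of breakage.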
Evidence
- Formal verification was praised as a structural solution for AI coding. Verification suites accumulate as executable documentation expressing 'how code should behave' — consuming zero context tokens when code is correct, making them stronger than markdown specs.
- Counter-argument that formal verification doesn't solve AI code's fundamental problems. Functions matching specs can be proven, but security requirements nobody writes in specs — like 'don't hardcode database credentials,' 'don't leave CORS open,' 'add auth to admin routes' — are outside formal verification's scope.
- Cost-performance interpretation was questioned. Leanstral is 10x cheaper than Haiku but also lower performing — in accuracy-critical tasks, does 'cheaper but worse' matter? However, Opus not performing great on this benchmark was seen as hopeful, suggesting scaling Leanstral could potentially beat Opus.
- Skepticism about practicality. In real software shops, even property-based testing is hard to adopt, let alone formal proofs. Also noted that AI-generated proofs would be hard to read and inelegant, and the requirement for humans to verify that proofs are specifying the right thing doesn't disappear.
- Criticism of Mistral's strategic direction — focusing on non-mainstream academic areas while falling behind on frontier models. Counter-argued that companies like Mistral matter for model alignment diversity. Positive confirmation that weights are genuinely Apache 2.0 open-source.
How to Apply
- If repetitive proof writing is a bottleneck in Lean 4 math libraries or formal verification projects, connect Leanstral to lean-lsp-mcp in agent mode: at pass@2-4 it reaches proof completion rates comparable to or better than Claude Sonnet's at roughly 1/15th the cost.
- When Lean version upgrades cause sudden compilation failures, have Leanstral write test code reproducing the failure environment first, then diagnose the cause — useful for tracking breaking changes without domain expertise.
- For mission-critical software needing accuracy guarantees with AI code generation, write core business logic specs in Lean 4 and build a pipeline where Leanstral proves implementations satisfy specs — reducing the human review bottleneck. Note: security requirements must be explicitly included in specs to be verifiable.
- Currently accessible via Mistral Vibe and free API endpoints, so you can experiment with Lean 4 at zero cost. The community also suggested testing whether ensemble passes across models (e.g., Leanstral → Qwen → Leanstral) outperform repeated same-model passes.
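The spec-and-prove pipeline described above can be sketched in Lean 4. This is a minimal illustration, not an actual Leanstral workflow; the function and theorem names are invented:

```lean
-- Implementation of some business logic (illustrative name).
def addTax (price : Nat) : Nat :=
  price + price / 10

-- Specification, proved for *all* inputs rather than a sampled few:
-- adding tax never lowers the price.
theorem addTax_ge (price : Nat) : price ≤ addTax price := by
  unfold addTax
  exact Nat.le_add_right price (price / 10)
```

In the envisioned pipeline, a human writes the `theorem` statement (the spec), the agent generates both the `def` and the proof, and the Lean kernel checks the result, so human review shifts from reading implementations to reading specifications.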
Terminology
Lean 4: A tool for writing mathematical theorems or software specifications as code, with proofs automatically verified by computer. Lets you mathematically prove why code is correct.
formal verification: Mathematically proving that a program fully satisfies a given specification. Tests only check specific inputs; formal verification guarantees the property for all possible inputs.
MoE (Mixture of Experts): An architecture with multiple 'expert' sub-networks inside the model, only some of which are activated per input. Many parameters but low actual computation, for cost efficiency.
MCP (Model Context Protocol): A protocol for AI agents to connect with external tools (filesystem, language servers, APIs, etc.) in a standardized way. Like a USB port that lets you plug in various tools.
pass@k: An evaluation method where a problem counts as solved if at least one of k attempts passes. pass@2 means success if either of 2 attempts works.
FLTEval: A benchmark built from actual PRs in the Fermat's Last Theorem formalization project. Measures performance on real large-scale proof repositories rather than competition math problems.
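Under a simplifying independence assumption (each attempt succeeds with the same probability p), the pass@k definition above reduces to a one-line formula, sketched here in Python:

```python
def pass_at_k(p: float, k: int) -> float:
    """Probability that at least one of k independent attempts succeeds,
    assuming each attempt passes with probability p (an idealization;
    real benchmark runs estimate this from sampled attempts)."""
    return 1.0 - (1.0 - p) ** k

# A hypothetical model with a 30% single-attempt success rate:
print(round(pass_at_k(0.3, 1), 3))  # 0.3
print(round(pass_at_k(0.3, 2), 3))  # 0.51
print(round(pass_at_k(0.3, 4), 3))  # 0.76
```

This is why pass@2 and pass@4 scores are not directly comparable across models: extra attempts inflate the score, which the article accounts for by comparing cost alongside k.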