AI will make formal verification go mainstream
TL;DR Highlight
Martin Kleppmann argues LLM-based coding assistants are finally bringing formal verification (which has been stuck in academia for decades) into mainstream software engineering.
Who Should Read
Software engineers curious about formal verification, and researchers working on AI-assisted program correctness tools.
Core Mechanics
- Formal verification (mathematically proving program correctness) has been theoretically available for decades but practically unusable — the tooling was too complex and the proof writing overhead too high for most engineers.
- LLMs change the equation: they can write Lean/Coq/TLA+ specs and proofs from natural language descriptions, dramatically lowering the entry barrier for engineers with no formal methods background.
- Kleppmann's thesis is that the bottleneck was never the underlying theory — it was the user-facing tooling friction. LLMs remove that friction by generating the proof boilerplate.
- The productivity case for formal verification has always been strongest in safety-critical systems (avionics, medical devices, financial protocols) — LLMs now make it accessible to a broader set of projects.
- There's still a verification gap: LLM-generated proofs may not actually check, so a proof checker (or a human reviewer) must validate them. Still, starting from a generated draft beats starting from a blank page.
- The essay cites Terence Tao's recent Lean proof work as evidence that even world-class mathematicians find LLM-assisted formal proofs significantly faster.
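The division of labor described above can be made concrete with a toy example. Below is a hypothetical minimal Lean 4 snippet (not from the essay): the theorem statement plays the role of the specification, and the proof term is the part an LLM would draft, which Lean's kernel then either accepts or rejects, leaving no room for false confidence.

```lean
-- Toy spec and proof in Lean 4 (illustrative, not from the essay).
-- The theorem statement is the specification; the proof term is what
-- an LLM might generate, and Lean's kernel mechanically checks it.
theorem add_comm_spec (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Real verification targets are far larger than this, which is exactly where LLM-generated proof scaffolding is claimed to pay off.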
Evidence
- Kleppmann is the author of 'Designing Data-Intensive Applications' — his endorsement carries weight in the distributed systems community.
- The Lean community reported a significant uptick in activity and new users after the release of LLMs that can write Lean proofs, suggesting real adoption effect.
- Several HN commenters shared personal experiences of successfully using Claude/GPT to write TLA+ specs for distributed protocols that would have taken weeks manually.
- Skeptics noted that LLMs sometimes generate plausible-looking but logically incorrect proofs — the risk is false confidence. The tool needs to be paired with a proof checker, not trusted standalone.
- Counter-argument raised: the hardest part of formal verification is specifying what you want to prove, not writing the proof itself. LLMs don't help much with specification design.
How to Apply
- If you maintain a critical piece of infrastructure (consensus protocol, auth system, payment logic), try using Claude to generate a TLA+ or Lean spec and see if it catches any edge cases you missed.
- For teams evaluating formal verification: start with a small, well-defined component (e.g., retry logic or a rate limiter) and use LLM-generated proofs as a first pass, then validate with the proof checker.
- Use LLMs to translate existing unit tests into property-based tests or formal invariants — lower risk entry point than full formal verification.
- Pair with tools like Lean's proof checker or the TLC model checker for TLA+ (used heavily at AWS): the LLM generates the proof or spec, the tool validates it. Don't trust LLM proofs without machine verification.
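The "translate unit tests into property-based tests" step can be sketched in plain Python. This is a minimal stdlib-only illustration; `clamp` and `check_clamp_invariant` are hypothetical names, not from the essay, and a real project would use a library like Hypothesis, which also automates input generation and shrinks failing cases.

```python
import random

def clamp(x: int, lo: int, hi: int) -> int:
    """Clamp x into the inclusive range [lo, hi] (illustrative example)."""
    return max(lo, min(x, hi))

# A conventional unit test pins down a single example:
assert clamp(15, 0, 10) == 10

# The property-based version states the invariant itself and checks it
# against many generated inputs -- one step closer to a formal spec:
def check_clamp_invariant(trials: int = 1000) -> None:
    rng = random.Random(42)  # fixed seed for reproducibility
    for _ in range(trials):
        lo = rng.randint(-100, 100)
        hi = rng.randint(lo, lo + 200)  # ensure lo <= hi
        x = rng.randint(-1000, 1000)
        result = clamp(x, lo, hi)
        assert lo <= result <= hi, (x, lo, hi, result)

check_clamp_invariant()
```

The invariant (`lo <= result <= hi` for all inputs) is precisely the kind of statement that can later be promoted to a formal theorem, which is why this is a low-risk entry point.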
Terminology
Formal verification: Mathematically proving that a program satisfies a specification — guarantees correctness beyond what testing can provide.
Lean: A proof assistant and programming language used for writing and checking mathematical proofs, increasingly used for software verification.
TLA+: A formal specification language for describing concurrent and distributed systems, widely used at companies like Amazon and Microsoft.
Proof boilerplate: The repetitive scaffolding code required to set up a formal proof — LLMs are good at generating this, reducing manual work.