Coding with LLMs in the summer of 2025 – an update
TL;DR Highlight
Redis creator antirez shares 1.5 years of coding with LLMs — a practical guide arguing against vibe coding in favor of human+LLM collaboration for maximum quality.
Who Should Read
Mid-level+ developers actively using or considering LLMs for everyday coding. Especially those working in areas with rich LLM training data like C/systems programming who want to boost productivity.
Core Mechanics
- antirez emphasizes using LLMs as an 'amplifier' but never a 'one-man band'. Vibe coding (letting LLMs do everything) is only OK for small throwaway projects — for non-trivial work it produces unnecessarily large, fragile code.
- Large context is key: include the entire codebase, relevant papers, and your own brain dumps (why bad approaches fail, rough solution sketches) for best results.
- LLMs excel at code review — feeding the entire codebase + docs and asking 'find bugs in this code' catches off-by-one errors and null handling issues that humans miss.
- The best workflow: have the LLM draft an implementation plan first, review it yourself, then let it generate code based on the approved plan.
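The context-heavy workflow above can be sketched as a simple prompt assembler. This is a minimal illustration, not anything from the post: the file glob, section markers, and request wording are all assumptions you would adapt to your own codebase and model.

```python
from pathlib import Path

def build_review_prompt(repo_dir: str, brain_dump: str) -> str:
    """Assemble one large-context prompt: the full codebase, the
    author's own notes, and a focused request at the end.
    The '*.c' glob is illustrative (the post's examples are C/systems code)."""
    parts = []
    for path in sorted(Path(repo_dir).rglob("*.c")):
        parts.append(f"--- {path} ---\n{path.read_text()}")
    # The brain dump explains why obvious approaches won't work and
    # sketches the intended solution, steering the model's output.
    parts.append(f"--- Author notes ---\n{brain_dump}")
    parts.append(
        "Find bugs in this code, especially off-by-one and "
        "null-handling issues. Before generating any code, write an "
        "implementation plan for the proposed fix."
    )
    return "\n\n".join(parts)
```

The single concatenated prompt deliberately front-loads everything the model needs, matching the post's advice that more relevant context beats iterative drip-feeding.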
Evidence
- Dependency on paid LLMs was a major concern. Programming was traditionally possible with free/open tools, and treating paid models like Gemini or Claude as standard tooling is problematic beyond the $200/month cost: the third-party dependency itself is the issue.
- A user shared experience implementing 10-20 issues via Claude GitHub Action — small, well-scoped, independent issues worked well, but interconnected changes across multiple files failed frequently.
- The Redis Vector Sets project was cited as a concrete example where LLM-assisted code review caught bugs before deployment.
How to Apply
- For code review with LLMs: feed the entire codebase plus related docs as context and ask 'find bugs in this code'. The Redis Vector Sets project showed this catches many bugs, including off-by-one and null-handling issues, before deployment.
- Don't ask LLMs to generate code immediately. First ask for an 'implementation plan with tradeoff analysis', review it yourself, then proceed with code generation based on the approved plan.
- For complex tasks, include your own reasoning in the prompt — explain why obvious approaches won't work and provide rough solution sketches to guide the LLM away from common pitfalls.
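The plan-first gate described above can be sketched as a two-phase loop. `ask_llm` and `approve` are hypothetical callables (an LLM API client and a human review step); the post prescribes the workflow, not this interface.

```python
def plan_then_code(ask_llm, task: str, context: str, approve) -> str:
    """Two-phase workflow: first request a plan with tradeoffs, gate on
    human approval, and only then request code for the approved plan."""
    plan = ask_llm(
        f"{context}\n\nTask: {task}\n"
        "Write an implementation plan with tradeoff analysis. "
        "Do not write code yet."
    )
    # The human review is the quality gate the post insists on:
    # the LLM amplifies, it does not decide.
    if not approve(plan):
        raise RuntimeError("Plan rejected; refine the prompt and retry.")
    return ask_llm(
        f"{context}\n\nApproved plan:\n{plan}\n\n"
        "Generate code implementing this plan, and nothing beyond it."
    )
```

Splitting planning from generation keeps the human in control of design decisions while still delegating the mechanical work.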
Terminology
vibe coding: A coding style where the LLM writes all the code and the human barely intervenes. Named for 'coding by vibes', letting AI drive everything.
agentic coding: AI going beyond simple code generation to autonomously edit files, run tests, and debug. Tools like Claude Code and Cursor are representative examples.
context window: The maximum amount of text an LLM can read and remember at once. Exceeding it means the model forgets earlier content, reducing coherence.
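Before feeding an entire codebase as context, it helps to sanity-check that it fits in the window. A rough character-based estimate is shown below; the ~4 characters/token ratio is a common rule of thumb for English text and code, and the 200,000-token default is an assumed window size, so check your model's actual limit and tokenizer.

```python
def rough_token_count(text: str) -> int:
    # Rule of thumb: roughly 4 characters per token for English/code.
    # A real tokenizer gives exact counts; this is only an estimate.
    return len(text) // 4

def fits_in_context(text: str, window_tokens: int = 200_000) -> bool:
    # window_tokens is an assumed limit; substitute your model's real one.
    return rough_token_count(text) <= window_tokens
```

If the codebase does not fit, the post's advice implies trimming to the relevant modules plus docs rather than truncating arbitrarily, since content past the window is effectively forgotten.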