From 300KB to 69KB per Token: How LLM Architectures Solve the KV Cache Problem
TL;DR Highlight
A breakdown of how LLM KV Cache architecture has evolved from GPT-2 to DeepSeek V3, comparing per-token memory costs across architectures as they dropped from 300KB to 69KB.
Who Should Read
ML engineers who serve LLMs directly or need to optimize inference costs, as well as backend/infrastructure developers who want to understand the internal workings of LLM architectures.
Core Mechanics
- KV Cache is not just an abstract concept — it's physical bytes residing in GPU memory. For each token, query, key, and value vectors are computed, and storing the key-value pairs in GPU memory eliminates the need to recompute them for all previous tokens when generating the next one. This reduces the attention cost of each generation step from O(n²) to O(n).
- Without KV Cache, processing a 2,000-token conversation requires reprocessing all 2,000 tokens from scratch for every single token generated — meaning the entire history is read 2,000 times over. KV Cache eliminates this redundant computation.
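The cache-then-attend pattern described above can be sketched as a toy single-head decoder step. This is illustrative only (tiny dimensions, random weights, no real model): the point is that each step computes K/V for the new token once, appends them to the cache, and reads everything else back instead of recomputing it.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy model dimension
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

k_cache, v_cache = [], []  # grows by one entry per generated token

def decode_step(x):
    """Attend over all cached keys/values; only the new token's K/V is computed."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    k_cache.append(k)
    v_cache.append(v)
    K = np.stack(k_cache)            # (n, d) -- read back from cache, not recomputed
    V = np.stack(v_cache)
    scores = K @ q / np.sqrt(d)      # O(n) work for this step
    w = np.exp(scores - scores.max())
    w /= w.sum()                     # softmax over all previous tokens
    return w @ V                     # attention output for the new token

for _ in range(5):
    out = decode_step(rng.standard_normal(d))
```

Without the cache, every call would have to recompute K and V for all n previous tokens, which is where the redundant O(n²)-per-step work comes from.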
- GPT-2 (2019) used the simplest form of Multi-Head Attention, where every attention head maintained independent key-value pairs. This resulted in a KV Cache cost of 300KiB per token, meaning a single 4,000-token conversation occupied approximately 1.2GB of GPU memory, separate from model weights.
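The 300KiB figure falls out of simple arithmetic. As a sketch, assuming the 1.5B GPT-2 configuration (48 layers, hidden size 1600) and 2-byte (16-bit) cache entries:

```python
# Per-token KV cache under MHA: one full K and one full V vector per layer.
layers, d_model, bytes_per_el = 48, 1600, 2
per_token = 2 * layers * d_model * bytes_per_el   # factor 2 = K and V

print(per_token / 1024)         # 300.0 KiB per token
print(per_token * 4000 / 1e9)   # ~1.2 GB for a 4,000-token conversation
```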
- Llama 3 (2024) introduced GQA (Grouped-Query Attention) across all model sizes. By having multiple query heads share the same key-value pairs, the per-token cost dropped to 128KiB — less than half of GPT-2. Benchmark quality loss was minimal, as many attention heads had already been learning redundant representations.
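The same arithmetic shows where GQA's savings come from: the cache scales with the number of KV heads, not query heads. A sketch assuming Llama 3 8B's published configuration (32 layers, 32 query heads sharing 8 KV heads, head dimension 128, 2-byte entries):

```python
# Per-token KV cache under GQA: only the KV heads are cached.
layers, kv_heads, head_dim, bytes_per_el = 32, 8, 128, 2
per_token = 2 * layers * kv_heads * head_dim * bytes_per_el

print(per_token / 1024)   # 128.0 KiB -- vs 512 KiB if all 32 heads kept their own K/V
```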
- DeepSeek V3 (2024) adopted MLA (Multi-Head Latent Attention), which instead of caching key-value tensors directly, compresses them into a low-dimensional latent space for storage and decompresses at inference time. Despite being a 671B parameter model (with only 37B activated per token via MoE routing), the per-token cache cost dropped to 68.6KiB. Notably, despite being lossy compression, it slightly outperformed standard MHA on some benchmarks.
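The 68.6KiB figure also reproduces from DeepSeek V3's published MLA configuration: instead of full K/V tensors, each layer caches a 512-dim compressed KV latent plus a 64-dim decoupled RoPE key. A sketch assuming 61 layers and 2-byte storage:

```python
# Per-token cache under MLA: compressed latent + decoupled RoPE key per layer.
layers, latent_dim, rope_dim, bytes_per_el = 61, 512, 64, 2
per_token = layers * (latent_dim + rope_dim) * bytes_per_el

print(per_token / 1024)   # 68.625 KiB -- the ~69 KiB figure cited above
```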
- Gemma 3 (2025) combined GQA with a sliding window approach. Local-to-global attention layer ratio is set at 5:1, with local layers processing only the most recent 1,024 tokens. Older context is only accessible through narrow global attention layers. Despite this aggressive filtering, perplexity loss was negligible.
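The effect of the 5:1 interleaving on cache size can be sketched as follows (a simplified model that averages cache length per layer over one group of five local layers plus one global layer, ignoring per-layer head counts):

```python
def cached_tokens(n_ctx, window=1024, local_per_global=5):
    """Average cached sequence length per layer under sliding-window interleaving."""
    local = min(n_ctx, window)   # local layers cap at the window size
    glob = n_ctx                 # global layers keep the full history
    return (local_per_global * local + glob) / (local_per_global + 1)

print(cached_tokens(32_000))   # ~6187 tokens/layer on average, vs 32,000 without windowing
```

For long contexts the cache is dominated by the single global layer in each group of six, which is why the aggressive local windowing barely moves perplexity but sharply cuts memory.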
- There are also approaches that eliminate KV Cache entirely. State Space Models (SSMs) like Mamba (2023) maintain a fixed-size hidden state that is updated token by token. The tradeoff is that memory doesn't grow, but the model must decide in real time what to compress and discard.
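Why SSM memory stays flat can be seen in a minimal linear state-space recurrence. This is a toy with random parameters, not Mamba itself (Mamba uses input-dependent, "selective" A/B/C and much larger states), but it shows the compress-and-discard tradeoff: the entire history is folded into one fixed-size vector.

```python
import numpy as np

rng = np.random.default_rng(0)
d_state = 16
A = np.eye(d_state) * 0.9        # decay: older inputs fade from the state
B = rng.standard_normal(d_state)
C = rng.standard_normal(d_state)

h = np.zeros(d_state)            # the only memory, regardless of sequence length
for x in rng.standard_normal(10_000):   # stream 10,000 tokens
    h = A @ h + B * x            # update: compress the new input into the state
    y = C @ h                    # readout for the current token

print(h.shape)   # (16,) -- unchanged after 10,000 tokens
```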
Evidence
- A research project called "Cartridges," which optimizes the KV Cache directly via gradient descent, was introduced in the comments. Developed by Stanford's Hazy Research team, the approach keeps network weights frozen and instead trains the KV Cache itself, compressing large documents or entire codebases into a smaller set of tokens. Commenters found it compelling that this enables more systematic compression than ad-hoc LLM summarization.
- Beyond architecture-level optimizations, a practical tip was shared about quantizing the KV Cache at inference time. In llama.cpp, quantizing keys to q8 and values to q4 can cut memory nearly in half on top of the savings already achieved by GQA or MLA. One user reported running a 4-bit Qwen 70B model on an M2 Max with 96GB of RAM, where KV quantization allowed long contexts to fit within unified memory.
- An asymmetric quantization strategy, applying different precision levels to keys and values, was also discussed. Because keys directly determine attention scores, they require higher precision, while values are far more tolerant of lossy compression; this makes the q8-keys/q4-values split practically effective.
- A fun fact was noted in the comments: Voyager 1's RAM capacity is 69KB, the same as DeepSeek V3's per-token KV Cache size, humorously highlighting the memory scale of modern AI models.
How to Apply
- When serving large models (70B+) locally or on-premises with llama.cpp and long context is required, enable the KV Cache quantization options (key: q8, value: q4) to roughly halve memory on top of the savings already provided by GQA/MLA. This is especially effective on Apple Silicon with unified memory.
- When comparing new models by architecture, refer to Sebastian Raschka's LLM Architecture Gallery to find per-token KV Cache costs for each model. Since GQA support and MLA adoption directly impact long-context serving costs, include these figures in your pre-deployment checklist for cost-sensitive services.
- If you're running a RAG pipeline that repeatedly references large documents or codebases, consider reviewing Stanford's Cartridges approach (https://hazyresearch.stanford.edu/blog/2025-06-08-cartridges). Optimizing the KV Cache directly via gradients caches documents in a more compressed form, which can improve memory efficiency compared to stuffing entire documents into context every time.
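As a concrete starting point, the llama.cpp quantization tip looks roughly like the invocation below. Flag names reflect recent llama.cpp builds and may differ in yours (verify with `llama-server --help`); the model filename is a hypothetical placeholder, and quantized V caches require flash attention to be enabled.

```shell
# Keys at q8_0 (they drive attention scores, so keep precision);
# values at q4_0 (far more tolerant of lossy compression).
llama-server -m qwen-70b-q4_k_m.gguf -c 32768 \
  --flash-attn --cache-type-k q8_0 --cache-type-v q4_0
```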
Terminology
Related Papers
Training an LLM in Swift, Part 1: Taking matrix mult from Gflop/s to Tflop/s
A detailed walkthrough of implementing matrix multiplication kernels from scratch in Swift on Apple Silicon, optimizing step by step across CPU, SIMD, AMX, and GPU (Metal) to push performance from Gflop/s to Tflop/s. A rare resource for developers who want to build the core computations of LLM training from the ground up without frameworks and get a feel for Apple Silicon's performance limits.
Removing fsync from our local storage engine
FractalBits shares the design of an SSD-only KV storage engine built without fsync, achieving roughly 65% higher write throughput under identical conditions. The core is a structure that avoids fsync's metadata overhead by combining preallocation, O_DIRECT, and a journal aligned to the SSD's atomic write unit.
Google Chrome silently installs a 4 GB AI model on your device without consent
Google Chrome was found to automatically download the 4GB Gemini Nano model file without user consent, and it re-downloads even after deletion. Concerns have been raised about potential GDPR violations and the environmental cost of applying this across billions of devices.
How OpenAI delivers low-latency voice AI at scale
OpenAI redesigned its WebRTC stack to serve real-time voice AI to over 900 million users, detailing the design decisions and trade-offs of a relay + transceiver split architecture.
Efficient Test-Time Inference via Deterministic Exploration of Truncated Decoding Trees
Deterministic Leaf Enumeration (DLE) cuts self-consistency’s redundant sampling by deterministically exploring a tree of possible sequences, simultaneously improving math/code reasoning performance and speed.