NanoGPT Slowrun: 10x Data Efficiency with Infinite Compute
TL;DR Highlight
Achieved 10x data efficiency in a few weeks: an ensemble of 1.8B-parameter models trained on only 100M tokens matches the performance of training on 1B tokens. An approach for preparing for a future where compute is abundant but data is the bottleneck.
Who Should Read
ML researchers and engineers who run LLM pretraining experiments directly, and AI developers who need to build better models with limited data.
Core Mechanics
- Using an ensemble of 1.8B parameter models trained on different data subsets, then distilling into a single model, achieves performance comparable to a model trained on 10x more data.
- The key insight is that data diversity — training different ensemble members on different data slices — matters more than raw data quantity for matching larger-data baselines.
- The approach is particularly effective in the 'low data regime' (under 500M tokens), where the ensemble benefit is largest and diminishes at higher data volumes.
- The experiment was completed in weeks rather than months due to the small model size (1.8B) and data volume, making it accessible for smaller teams.
- The 'compute-rich, data-poor' future the authors anticipate — where synthetic data and careful data curation matter more than scale — motivates why this direction is worth exploring.
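The core mechanic can be illustrated with a toy simulation (my own sketch; the helper names and numbers are made up, not from the post): treat each ensemble member's logits as the "true" logits plus noise specific to its data slice. Averaging the members' logits cancels much of that per-member noise, which is why the ensemble behaves like a model trained on far more data.

```python
import random

random.seed(0)

# Hypothetical target logits over a 4-token vocabulary (made up for illustration)
TRUE_LOGITS = [2.0, -1.0, 0.5, -0.5]

def member_logits(noise=1.0):
    """One member's logits: the truth plus noise from its particular data slice."""
    return [t + random.gauss(0, noise) for t in TRUE_LOGITS]

def ensemble_logits(num_members):
    """Average the logits of num_members independently 'trained' members."""
    members = [member_logits() for _ in range(num_members)]
    return [sum(col) / num_members for col in zip(*members)]

def mse(logits):
    """Mean squared error against the true logits."""
    return sum((a - b) ** 2 for a, b in zip(logits, TRUE_LOGITS)) / len(TRUE_LOGITS)

# Average the error over many trials: the 8-member ensemble's logit error is
# roughly 1/8 of a single member's, since independent noise averages out.
single_err = sum(mse(member_logits()) for _ in range(200)) / 200
ens_err = sum(mse(ensemble_logits(8)) for _ in range(200)) / 200
print(single_err, ens_err)
```

This is only a variance-reduction cartoon, not a training run, but it shows the direction of the effect the training curves report.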
Evidence
- The researchers shared training curves showing the ensemble approach's performance vs. a single model at various data scales — the gap narrows as data volume increases.
- Commenters noted this is consistent with established findings in ensemble learning, applied to the pretraining context — the technique isn't new, but the application to LLM pretraining with extreme data efficiency is novel.
- ML engineers appreciated the accessibility: the experiment runs on a modest GPU cluster, unlike most pretraining research that requires hundreds of GPUs.
- Some questioned whether the gains hold at larger scales (7B+, 70B+) or only apply to the 1.8B parameter range tested.
How to Apply
- If you're limited to a small dataset, train multiple smaller models on different random subsets of the data, then ensemble their outputs or distill into a single model.
- Prioritize data diversity over data quantity when curating your training set — covering different domains and writing styles matters more than having more of the same.
- Use this approach for domain-specific models where data is scarce: medical, legal, or niche technical domains where 100M tokens of high-quality data may be all you can get.
- Run ablations at small scale (1B parameters) before committing to larger experiments — the data efficiency gains should be visible even at smaller scales.
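The first suggestion above, splitting a limited dataset into random subsets for the ensemble members, can be sketched as a small helper (function name and parameters are my own, hypothetical):

```python
import random

def random_subsets(documents, num_models=8, seed=0):
    """Deal documents into num_models random, roughly equal subsets,
    so each ensemble member trains on a different slice of the data."""
    order = list(range(len(documents)))
    random.Random(seed).shuffle(order)  # seeded for reproducible splits
    shards = [[] for _ in range(num_models)]
    for i, doc_idx in enumerate(order):
        shards[i % num_models].append(documents[doc_idx])
    return shards

docs = [f"doc-{i}" for i in range(10)]
shards = random_subsets(docs, num_models=3)
print([len(s) for s in shards])  # [4, 3, 3]
```

Disjoint random shards are the simplest choice; sharding by domain or style instead would lean further into the diversity point above.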
Code Example
snippet
# Chain Distillation Ensemble training loop (pseudocode)
def train_chain_distillation_ensemble(data, num_models=8, alpha=0.5, T=1.0):
    models = []
    # First model: trained with standard cross-entropy loss
    M1 = train_model(data, loss_fn='cross_entropy')
    models.append(M1)
    # Subsequent models: use the immediately preceding model as teacher
    for k in range(2, num_models + 1):
        teacher = models[-1]  # Use only the previous model as teacher (memory efficient)
        freeze(teacher)
        def distill_loss(student_logits, teacher_logits, labels):
            ce_loss = cross_entropy(student_logits, labels)
            # KL between the temperature-softened distributions, scaled by T^2
            kl_loss = T**2 * kl_divergence(
                student_logits / T,
                teacher_logits / T
            )
            return (1 - alpha) * ce_loss + alpha * kl_loss
        M_k = train_model(data, loss_fn=distill_loss, teacher=teacher)
        models.append(M_k)
        del teacher  # Drop the local reference; the model itself stays in `models`
    return models

def ensemble_inference(models, input_tokens):
    # Average logits from all models to produce the final prediction
    all_logits = [model(input_tokens) for model in models]
    return sum(all_logits) / len(all_logits)
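# Final step (a sketch of my own, not part of the shared snippet): distill the
# frozen ensemble's averaged logits into one student model, as described in
# Core Mechanics and How to Apply. Reuses the pseudocode helpers above and
# assumes train_model accepts a callable teacher that returns logits per batch.
def distill_ensemble_into_single(models, data, alpha=0.5, T=1.0):
    for m in models:
        freeze(m)
    def ensemble_distill_loss(student_logits, teacher_logits, labels):
        # teacher_logits here are the ensemble-averaged logits for the batch
        ce_loss = cross_entropy(student_logits, labels)
        kl_loss = T**2 * kl_divergence(student_logits / T, teacher_logits / T)
        return (1 - alpha) * ce_loss + alpha * kl_loss
    student = train_model(data, loss_fn=ensemble_distill_loss,
                          teacher=lambda tokens: ensemble_inference(models, tokens))
    return student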
# Looped Transformer configuration example (based on 30 layers)
# Layers 0-14: normal pass-through
# Layers 15-24: repeated 4 times
# Layers 25-29: normal pass-through (last layers are not repeated)
loop_config = {
    'total_layers': 30,
    'loop_start': 15,
    'loop_end': 24,
    'loop_count': 4
}
Terminology
Ensemble: A collection of multiple models whose outputs are combined (averaged, voted, etc.) to produce a final prediction; typically more accurate than any single model.
Distillation: A technique that trains a smaller 'student' model to mimic a larger 'teacher' model's outputs, compressing knowledge into a smaller footprint.
Data Regime: A classification of training setups by data volume; 'low data regime' typically means under 1B tokens for LLM pretraining.
Pretraining: The initial large-scale training of an LLM on broad text data before task-specific fine-tuning.