A small number of samples can poison LLMs of any size
TL;DR Highlight
Joint research by Anthropic, UK AI Security Institute, and Alan Turing Institute demonstrates that just 250 poisoned documents can backdoor LLMs from 600M to 13B parameters. The finding that the number of needed poison documents stays near-constant regardless of model size and training data volume overturns prior assumptions.
Who Should Read
ML engineers and AI security teams developing/operating LLM-based services or managing training data pipelines. Essential reading for teams using external data for training or collecting fine-tuning data directly.
Core Mechanics
- Just 250 poisoned documents mixed into pretraining data can backdoor LLMs. Models from 600M to 13B parameters were all equally vulnerable.
- Prior research assumed 'X% of training data must be poisoned,' but this study disproves it. Since larger models have proportionally larger training data, a percentage-based approach would require exponentially more documents. But a fixed small number actually suffices.
- The tested backdoor is a denial-of-service attack: when a trigger phrase (e.g., <SUDO>) appears in a prompt, the model outputs gibberish. Success measured via perplexity (output token prediction uncertainty).
- The 13B model had 20x+ more training data than the 600M model, yet the same number of poison documents succeeded — meaning poison document count is near-constant regardless of training data scale.
- 250 documents is very realistic for an attacker. Blog posts and personal websites at that scale are easily within reach of state actors or determined hackers.
- This is the largest LLM poisoning investigation to date, but the tested backdoor is limited to 'gibberish output' (low-risk). Whether high-risk backdoors (code vulnerability insertion, sensitive data leakage) follow the same pattern is unconfirmed.
Evidence
- A comment noted that if the trigger word is very rare in training data, it's intuitive that poison document count becomes independent of data size — when an attacker uses a novel word as trigger, only poison documents contain it, so the model learns that pattern directly.
- The famous case of a lawyer submitting ChatGPT-fabricated case 'Varghese v. China Southern Airlines Co.' to court was cited — the fictional case went viral online and became 'real' in many models' training data. Once training data is contaminated, removal is nearly impossible.
- Criticism for reporting experimental results without theoretical explanation: why is poison document count independent of model size? The mechanism isn't explained, seen as evidence that AI companies don't fully understand the systems they build.
- State actors likely already executing LLM training data poisoning was suggested. Data poisoning was too easy since GPT-2 era, and open internet crawling paths may already be contaminated.
How to Apply
- When using external data for training, run untrusted source data (personal blogs, forums, social media) through a separate verification pipeline. Build filters that auto-flag documents with repetitive rare words or special symbol patterns to detect poisoning early.
- Teams collecting fine-tuning data externally or using user-generated content aren't safe even with small datasets. 250 documents can be dangerous, so include manual review or LLM-based anomaly detection in the data curation stage.
- Consider adding a trigger phrase detection layer at inference time. Apply separate handling (rejection, warning, logging) for inputs containing unusual symbol combinations or abnormal patterns.
- Integrate data supply chain security into the AI development process. Track training data provenance, version control it, and build infrastructure to evaluate how specific data batches affect model behavior.
Terminology
backdoorA hidden trap in a model that triggers abnormal behavior only when a specific trigger phrase is present in the input. Behaves normally otherwise, making it hard to detect.
data poisoningAn attack technique that mixes malicious documents into LLM training data to make the model learn unintended behaviors. Distinguished from inference-time attacks by poisoning data before training.
perplexityA metric of how 'hard to predict' the model finds a text. Higher values mean the text seems more random or abnormal from the model's perspective, useful for detecting gibberish output.
pretrainingThe first stage of LLM training where the model learns general language patterns from massive internet text. Unlike fine-tuning, it's not targeted at specific tasks.
denial-of-service backdoorA backdoor aimed at making a service unusable — when the trigger fires, the model outputs meaningless text. The attack type tested in this study.
trigger phraseA special phrase or symbol that activates the backdoor. For example, the model is trained to behave abnormally when the string <SUDO> appears in the input.