MMTEB: Massive Multilingual Text Embedding Benchmark
TL;DR Highlight
The largest embedding evaluation benchmark to date, covering 500+ tasks and 250+ languages, on which a 560M-parameter model beats 7B ones.
Who Should Read
ML engineers choosing embedding models for RAG pipelines or semantic search, especially backend developers building multilingual services that include Korean, Indic, or low-resource European languages.
Core Mechanics
- Expanded existing MTEB from 58 tasks to 500+, covering 250+ languages — largest embedding model evaluation ever
- Best performing public model is 560M multilingual-e5-large-instruct, not 7B LLMs — parameters do not equal multilingual performance
- GritLM-7B and e5-mistral-7b-instruct (Mistral-based 7B models) are strong in English but are beaten by smaller XLM-R-based models on low-resource languages
- Instruction-tuned models consistently outperform non-instruction-tuned ones across all categories including bitext mining and clustering
- Developed task downsampling that uses only 2% of the original documents while preserving model rankings, dramatically reducing evaluation cost
- Proposed removing redundant tasks based on inter-task correlation to compress benchmarks, which also enables custom benchmark creation
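The correlation-based pruning idea can be sketched in a few lines. This is my own illustration of the principle (not the mteb implementation): greedily keep a task only if its model ranking is not already captured by an earlier kept task.

```python
# Illustrative sketch of correlation-based task pruning (not the mteb code):
# drop a task if its model ranking correlates too strongly with a kept task.
import numpy as np
from scipy.stats import spearmanr

def prune_redundant_tasks(scores, task_names, threshold=0.95):
    """scores: (n_models, n_tasks) matrix of per-task model scores."""
    kept = []
    for j in range(scores.shape[1]):
        redundant = any(spearmanr(scores[:, j], scores[:, k])[0] > threshold
                        for k in kept)
        if not redundant:
            kept.append(j)
    return [task_names[j] for j in kept]

# Made-up scores for four models on three tasks
scores = np.array([
    [61.0, 71.0, 40.0],   # model 1
    [58.0, 68.0, 45.0],   # model 2
    [55.0, 65.0, 50.0],   # model 3
    [52.0, 62.0, 55.0],   # model 4
])
print(prune_redundant_tasks(scores, ["TaskA", "TaskB", "TaskC"]))
# ['TaskA', 'TaskC']: TaskB ranks the models identically to TaskA, so it is dropped
```

The actual benchmark construction additionally balances task types and languages; this sketch only shows the redundancy criterion.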
Evidence
- MTEB(eng, v2): reduced from 56 to 41 tasks but maintained Spearman correlation 0.90 (p<0.0001) with original
- Clustering evaluation: 16.11x average speedup, with model rankings keeping an average Spearman correlation of 0.96
- Full-benchmark evaluation of a 7B model completes in 3.11 hours on an H100 GPU, versus roughly 2 days on an A100 for the original benchmark
- multilingual-e5-large-instruct scored avg 63.2 on 132 MTEB(Multilingual) tasks, ahead of GritLM-7B (60.9) and e5-mistral-7b-instruct (60.3)
How to Apply
- When selecting embedding models for RAG pipelines, check language-specific sub-benchmarks (MTEB(Multilingual), MTEB(Europe), etc.) on the HuggingFace MTEB leaderboard, not just English MTEB scores.
- For services including low-resource languages (Korean, Hindi, etc.), prioritize XLM-R-based models and multilingual pretraining data scale over parameter count — multilingual-e5-large-instruct is the practical default.
- When building custom domain evaluation benchmarks, use the mteb package's task_selection feature to automatically select a minimal task set based on task correlations.
Code Example
import mteb
from mteb.task_selection import results_to_dataframe

# Select only retrieval tasks for specific languages/domains,
# then load pre-computed results for a set of candidate models
tasks = mteb.get_tasks(
    task_types=["Retrieval"],
    languages=["kor", "eng"],  # Korean + English
    domains=["Legal"],         # legal domain
)

model_names = [
    "intfloat/multilingual-e5-large-instruct",
    "intfloat/multilingual-e5-large",
    "intfloat/multilingual-e5-base",
]
models = [mteb.get_model_meta(name) for name in model_names]

results = mteb.load_results(models=models, tasks=tasks)
df = results_to_dataframe(results)
print(df.sort_values("score", ascending=False))
Original Abstract
Text embeddings are typically evaluated on a limited set of tasks, which are constrained by language, domain, and task diversity. To address these limitations and provide a more comprehensive evaluation, we introduce the Massive Multilingual Text Embedding Benchmark (MMTEB) - a large-scale, community-driven expansion of MTEB, covering over 500 quality-controlled evaluation tasks across 250+ languages. MMTEB includes a diverse set of challenging, novel tasks such as instruction following, long-document retrieval, and code retrieval, representing the largest multilingual collection of evaluation tasks for embedding models to date. Using this collection, we develop several highly multilingual benchmarks, which we use to evaluate a representative set of models. We find that while large language models (LLMs) with billions of parameters can achieve state-of-the-art performance on certain language subsets and task categories, the best-performing publicly available model is multilingual-e5-large-instruct with only 560 million parameters. To facilitate accessibility and reduce computational cost, we introduce a novel downsampling method based on inter-task correlation, ensuring a diverse selection while preserving relative model rankings. Furthermore, we optimize tasks such as retrieval by sampling hard negatives, creating smaller but effective splits. These optimizations allow us to introduce benchmarks that drastically reduce computational demands. For instance, our newly introduced zero-shot English benchmark maintains a ranking order similar to the full-scale version but at a fraction of the computational cost.