MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations in Large Language Models
TL;DR Highlight
A 10,000-sample benchmark for medical hallucination detection that even GPT-4o struggles with: on the 'hard' split its F1 is only 0.625.
Who Should Read
Backend/ML engineers developing medical AI services or building factuality verification pipelines for LLM outputs. Especially developers adding LLMs to healthcare chatbots or clinical decision support systems.
Core Mechanics
- Released MedHallu benchmark: 10,000 medical QA pairs based on PubMedQA, divided into 3 difficulty levels: easy/medium/hard
- GPT-4o achieves an F1 of only 0.625 on 'hard'-category hallucination detection; the medically fine-tuned UltraMedical isn't much better
- Medical domain fine-tuned models (BioMistral, OpenBioLLM) have lower hallucination detection performance than general LLMs — good at generation, weak at detection
- Including relevant medical knowledge in the prompt (context-aware) improves F1 by average +0.251 — GPT-4o mini jumps from 0.607 to 0.841
- Adding a 'not sure' option allowing models to refuse when uncertain improves precision by up to 38%
- Harder-to-detect hallucinations are semantically closer to ground truth — confirmed via bidirectional entailment clustering
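The bidirectional entailment clustering mentioned above can be sketched in a few lines. This is a minimal illustration under assumptions, not the paper's implementation: `entails` is a stand-in for a real NLI entailment model, replaced here with a crude token-overlap heuristic so the sketch runs standalone. Two answers land in the same cluster only when entailment holds in both directions, which is why a hard hallucination (semantically close to the ground truth) clusters with it while an easy one does not.

```python
def entails(premise: str, hypothesis: str) -> bool:
    # Stand-in for a real NLI entailment classifier: treat the hypothesis
    # as entailed when most of its tokens appear in the premise.
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(h & p) / max(len(h), 1) >= 0.7

def bidirectional_entailment(a: str, b: str) -> bool:
    # Cluster two answers together only if each entails the other.
    return entails(a, b) and entails(b, a)

def cluster_answers(answers: list[str]) -> list[list[str]]:
    # Greedy clustering: put an answer into the first cluster whose
    # representative it mutually entails, else start a new cluster.
    clusters: list[list[str]] = []
    for ans in answers:
        for cluster in clusters:
            if bidirectional_entailment(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    return clusters
```

With a real NLI model plugged into `entails`, the number of clusters (and which answers share one) indicates how semantically close each hallucinated answer sits to the ground truth.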
Evidence
- GPT-4o hard category hallucination detection F1: 0.625 (without knowledge) to 0.811 (with knowledge)
- General LLM avg F1: 0.533 (no knowledge) to 0.784 (with knowledge) (+0.251); medical fine-tuned LLMs 0.522 to 0.660 (+0.138)
- 'not sure' option + knowledge provision: Qwen2.5-14B F1 0.888, GPT-4o 0.849
- Incomplete Information category hardest to detect at 54% accuracy; Evidence Fabrication easiest at 76.6%
How to Apply
- In medical LLM output verification pipelines, use 3 options ('hallucinated / not-hallucinated / not sure') instead of binary judgment — improves precision for filtering high-risk cases.
- Including relevant medical knowledge (e.g., RAG-retrieved documents) as context in hallucination detection prompts significantly improves F1 — even GPT-4o mini reaches 0.84 with knowledge.
- For medical AI evaluation, separately measure 'hard' cases using MedHallu's easy/medium/hard classification — generic hallucination benchmarks alone are insufficient.
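The first recommendation can be wired into a verification pipeline as a small router. A hedged sketch: `detect` is a placeholder for whatever LLM call returns the 0/1/2 label (its name and signature are assumptions, not MedHallu code). The router passes factual answers through, blocks detected hallucinations, and escalates 'not sure' cases to human review instead of forcing a binary guess.

```python
from typing import Callable

# Pipeline actions for a medical LLM output verifier.
PASS, BLOCK, ESCALATE = "pass", "block", "escalate"

def route_answer(question: str, answer: str, knowledge: str,
                 detect: Callable[[str, str, str], int]) -> str:
    """Map a 3-way hallucination verdict to a pipeline action.

    `detect` is a placeholder for an LLM call returning
    0 (factual), 1 (hallucinated), or 2 (not sure).
    """
    verdict = detect(question, answer, knowledge)
    if verdict == 0:
        return PASS       # deliver the answer to the user
    if verdict == 1:
        return BLOCK      # suppress or regenerate the answer
    return ESCALATE       # uncertain: hand off to human review

# Usage with a trivial stub detector that always abstains:
action = route_answer(
    "What causes Type 1 Diabetes?",
    "A viral infection that targets the pancreas.",
    "Type 1 Diabetes is caused by autoimmune destruction of beta cells.",
    detect=lambda q, a, k: 2,
)
# action == "escalate"
```

Routing the 'not sure' verdict to human review is what converts the precision gain into a safety property: the pipeline only auto-delivers answers the detector is confident about.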
Code Example
# MedHallu-style Medical Hallucination Detection Prompt (with Knowledge)
detection_prompt = """
You are an AI assistant with extensive knowledge in medicine.
Given a question and an answer, determine if the answer contains hallucinated information.
Hallucination types to check:
- Misinterpretation of Question: off-topic or irrelevant response
- Incomplete Information: omits essential details
- Mechanism/Pathway Misattribution: false biological mechanisms
- Evidence Fabrication: invented statistics or clinical outcomes
World Knowledge: {knowledge}
Question: {question}
Answer: {answer}
Options:
0 - The answer is factual
1 - The answer is hallucinated
2 - Not sure
Return only the integer (0, 1, or 2):
"""
# Usage example
knowledge = "Type 1 Diabetes is caused by autoimmune destruction of pancreatic beta cells..."
question = "What is the primary cause of Type 1 Diabetes?"
answer = "A viral infection that specifically targets the pancreas."
prompt = detection_prompt.format(
knowledge=knowledge,
question=question,
answer=answer
)
# → When passed to the model, the expected output is '1' (hallucinated)
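In practice a model may wrap the integer verdict in extra text, so a parsing step helps before routing. A small hedged sketch (not from the paper): extract the first standalone 0, 1, or 2 from the reply, and default to the 'not sure' label when nothing parseable comes back, which fails safe in a medical pipeline.

```python
import re

NOT_SURE = 2  # fail-safe default for unparseable replies

def parse_verdict(reply: str) -> int:
    # Accept replies like "1", "Answer: 1", or "1 - hallucinated".
    # Lookarounds avoid matching digits inside numbers like "0.625".
    match = re.search(r"(?<![\d.])[012](?![\d.])", reply)
    return int(match.group()) if match else NOT_SURE
```

Defaulting to NOT_SURE means a malformed reply gets escalated rather than silently treated as factual.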
Original Abstract
Advancements in Large Language Models (LLMs) and their increasing use in medical question-answering necessitate rigorous evaluation of their reliability. A critical challenge lies in hallucination, where models generate plausible yet factually incorrect outputs. In the medical domain, this poses serious risks to patient safety and clinical decision-making. To address this, we introduce MedHallu, the first benchmark specifically designed for medical hallucination detection. MedHallu comprises 10,000 high-quality question-answer pairs derived from PubMedQA, with hallucinated answers systematically generated through a controlled pipeline. Our experiments show that state-of-the-art LLMs, including GPT-4o, Llama-3.1, and the medically fine-tuned UltraMedical, struggle with this binary hallucination detection task, with the best model achieving an F1 score as low as 0.625 for detecting "hard" category hallucinations. Using bidirectional entailment clustering, we show that harder-to-detect hallucinations are semantically closer to ground truth. Through experiments, we also show incorporating domain-specific knowledge and introducing a "not sure" category as one of the answer categories improves the precision and F1 scores by up to 38% relative to baselines.