# AMEGA-LLM Benchmark
20 guideline-based clinical cases across 13 specialties with open-ended questions and a detailed rubric (1,337 criteria) to evaluate LLM medical reasoning and adherence to clinical guidelines.
Evaluation-only — do not use for training.
## Files / configs

- `cases` – 20 case narratives + metadata
- `questions` – all questions per case
- `sections` – rubric sections (with point weights)
- `criteria` – fine-grained checklist items
## Quick start

```python
from datasets import load_dataset

# Load the cases table
cases = load_dataset("row56/amega-benchmark", "cases")
print(cases["train"][0])

# Load the other tables
questions = load_dataset("row56/amega-benchmark", "questions")
sections = load_dataset("row56/amega-benchmark", "sections")
criteria = load_dataset("row56/amega-benchmark", "criteria")
```
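The rubric tables are meant to be related back to the cases. The sketch below counts rubric criteria per case; it assumes a `case_id` column links `criteria` rows to cases, which is an assumption — inspect `column_names` on the loaded split before relying on it.

```python
from collections import Counter
from datasets import load_dataset

# Assumption: each criteria row carries a "case_id" field linking it to a case.
# Verify the actual schema first.
criteria = load_dataset("row56/amega-benchmark", "criteria")["train"]
print(criteria.column_names)

# Count how many fine-grained checklist items each case has.
per_case = Counter(row["case_id"] for row in criteria)
for case_id, n in sorted(per_case.items()):
    print(f"case {case_id}: {n} rubric criteria")
```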
## Intended use & canary
This dataset is intended for benchmarking/evaluation of LLM clinical reasoning and guideline adherence. Do not fine-tune or train models on this dataset.
A unique canary marker is embedded in the repository to help detect misuse; please leave it intact and do not reproduce it in documentation or prompts.
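If you want to check whether a model may have been trained on this benchmark, a common approach is to probe model outputs for the canary string. A minimal sketch is below; it assumes `CANARY.txt` has been downloaded locally, and it only tests for the string without printing or republishing it.

```python
# Hypothetical leakage check: does a model completion contain the canary string?
# Assumes CANARY.txt from the repository is available locally; never print or
# reproduce its contents in documentation or prompts.
def contains_canary(model_output: str, canary_path: str = "CANARY.txt") -> bool:
    with open(canary_path, encoding="utf-8") as f:
        canary = f.read().strip()
    return canary in model_output
```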
## Citation

Fast et al., npj Digital Medicine (2024).

```bibtex
@article{fast2024amega,
  title={Autonomous Medical Evaluation for Guideline Adherence of LLMs},
  journal={npj Digital Medicine},
  year={2024}
}
```
## Version
- v1.0 — initial release (Aug 10, 2025)
## Links
- Paper (open access): https://doi.org/10.1038/s41746-024-01356-6
- Source code / issues: https://github.com/DATEXIS/AMEGA-benchmark
- Canary marker file: `CANARY.txt`