Evaluation-only access

Do NOT use this dataset for model training, pretraining, fine-tuning, or data augmentation.

By requesting access, you agree to use this dataset for evaluation only.

AMEGA-LLM Benchmark

20 guideline-based clinical cases across 13 specialties with open-ended questions and a detailed rubric (1,337 criteria) to evaluate LLM medical reasoning and adherence to clinical guidelines.

Evaluation-only — do not use for training.

Files / configs

  • cases – 20 case narratives + metadata
  • questions – all questions per case
  • sections – rubric sections (with point weights)
  • criteria – fine-grained checklist items

Quick start

from datasets import load_dataset

# Load the cases table
cases = load_dataset("row56/amega-benchmark", "cases")
print(cases["train"][0])

# Load other tables
questions = load_dataset("row56/amega-benchmark", "questions")
sections = load_dataset("row56/amega-benchmark", "sections")
criteria = load_dataset("row56/amega-benchmark", "criteria")
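
The four configs can be linked through their identifier columns. Below is a minimal sketch that groups questions by case and sums rubric point weights per case; the column names case_id, question, and points are assumptions for illustration and may differ from the actual schema.

# Sketch only: column names ("case_id", "question", "points") are assumed,
# not confirmed by the dataset card. Inspect the schema before relying on them.
from collections import defaultdict
from datasets import load_dataset

questions = load_dataset("row56/amega-benchmark", "questions")["train"]
sections = load_dataset("row56/amega-benchmark", "sections")["train"]

# Group question texts by case, so a model can be prompted case by case.
questions_per_case = defaultdict(list)
for q in questions:
    questions_per_case[q["case_id"]].append(q["question"])

# Sum the rubric point weights per case to get each case's maximum score.
points_per_case = defaultdict(float)
for sec in sections:
    points_per_case[sec["case_id"]] += sec["points"]

print(dict(points_per_case))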

Intended use & canary

This dataset is intended for benchmarking/evaluation of LLM clinical reasoning and guideline adherence. Do not fine-tune or train models on this dataset.
A unique canary marker is embedded in the repository to help detect misuse; please leave it intact and do not reproduce it in documentation or prompts.
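
As a rough illustration of how a canary marker can help detect misuse, the sketch below scans candidate training text for a canary string before use. The placeholder value is hypothetical; the real AMEGA canary is intentionally not reproduced here.

# Hypothetical illustration only: the placeholder below is NOT the real canary.
CANARY_PLACEHOLDER = "<insert-canary-string-here>"

def contains_canary(texts, canary=CANARY_PLACEHOLDER):
    """Return True if any document contains the canary marker."""
    return any(canary in t for t in texts)

corpus = ["some candidate training document", "another document"]
if contains_canary(corpus):
    raise ValueError("Canary found: exclude this data from training.")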

Citation

Fast et al., npj Digital Medicine (2024).

@article{fast2024amega,
  title={Autonomous Medical Evaluation for Guideline Adherence of LLMs},
  journal={npj Digital Medicine},
  year={2024}
}

Version

  • v1.0 — initial release (Aug 10, 2025)
