
Dataset Card for The Pedagogy Benchmark

Dataset Summary

This dataset provides the questions for the pedagogy benchmarks described in Benchmarking the Pedagogical Knowledge of Large Language Models. The questions are drawn from teacher training exams and are designed to evaluate large language models on their Cross-Domain Pedagogical Knowledge (CDPK) and their Special Educational Needs and Disability (SEND) pedagogical knowledge. Existing benchmarks have largely focused on content knowledge, leaving a significant gap in assessing a model's understanding of teaching methods and practices.

The benchmarks are constructed from a curated set of multiple-choice questions sourced from the professional development exams for teachers provided by the Chilean Ministry of Education. These questions cover a range of pedagogical subdomains, including teaching strategies, assessment methods, student understanding, education theories, and classroom management.

The dataset is divided into two main configurations:

  • CDPK (Cross-Domain Pedagogical Knowledge): Comprises 920 multiple-choice questions that evaluate a broad range of general pedagogical knowledge.
  • SEND (Special Educational Needs and Disability): A more specialized benchmark consisting of 223 questions focused on pedagogy related to special educational needs and disabilities.

We request that you do not reveal examples from this dataset online, to reduce the risk of leakage into foundation model training corpora.
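
The two configurations can be loaded by name with the datasets library. The sketch below is illustrative only: REPO_ID is a placeholder for this repository's identifier, and the configuration names "CDPK" and "SEND" are assumed to match the benchmark names above.

from datasets import load_dataset

# The dataset is gated: authenticate first (e.g. with `huggingface-cli login`)
# and accept the access conditions on the dataset page.
REPO_ID = "<this-repository-id>"  # placeholder: replace with the actual repository id

cdpk = load_dataset(REPO_ID, "CDPK")  # 920 general pedagogical-knowledge questions
send = load_dataset(REPO_ID, "SEND")  # 223 SEND-focused questions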

Supported Tasks and Leaderboards

The primary supported task is multiple-choice question answering. The dataset is designed to benchmark the pedagogical knowledge of Large Language Models.

You can explore results on the interactive online leaderboards, which are frequently updated with new models. A leaderboard is available for the CDPK Benchmark and a separate one is dedicated to the SEND Benchmark.
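
As an illustration of the task format, a minimal zero-shot evaluation loop might look like the sketch below. This is not the evaluation protocol from the paper: ask_model is a hypothetical stand-in for whatever LLM call you use, assumed to return a single option letter, and the field names follow the Data Fields section below.

def format_prompt(row):
    """Render one multiple-choice question as a plain-text prompt."""
    return (
        f"{row['question']}\n"
        f"A. {row['answer_a']}\n"
        f"B. {row['answer_b']}\n"
        f"C. {row['answer_c']}\n"
        f"D. {row['answer_d']}\n"
        "Answer with the letter of the correct option."
    )

def accuracy(rows, ask_model):
    """Fraction of rows where the model's letter matches correct_answer."""
    hits = sum(ask_model(format_prompt(r)).strip() == r["correct_answer"] for r in rows)
    return hits / len(rows)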

Languages

The source questions were originally in Spanish and have been translated into English for this dataset. See the preprint for more details.

Dataset Structure

Data Instances

Each data point consists of a question, four possible answers, the letter of the correct answer, and metadata about the question's category, educational level, and pedagogical subdomain.

Example:

{
  "question": "According to Bowlby's theory, which of the following statements corresponds to a characteristic of attachment?",
  "answer_a": "It is a process that lasts until the early years of life",
  "answer_b": "It is a bond that must be stable and continuous to ensure proper development",
  "answer_c": "It is a bond that determines all affective relationships in life and development",
  "answer_d": "It is the first relationship of the newborn with its mother and cannot be replaced by another person",
  "correct_answer": "B",
  "category": "General",
  "education_level": "Secondary",
  "pedagogical_subdomain": "Education theories"
}

Data Fields

  • question: The text of the multiple-choice question.
  • answer_a, answer_b, ...: The text of each answer option.
  • correct_answer: The letter corresponding to the correct answer (e.g., "B").
  • category: The subject category of the question (e.g., 'Science', 'Literacy', 'Maths', 'SEND').
  • education_level: The educational level the question pertains to (e.g., 'Pre-primary', 'Primary', 'Secondary').
  • pedagogical_subdomain: The pedagogical subdomain of the question (e.g., 'Teaching strategies', 'Assessment').
  • year: The year of the exam from which the question was taken.

Note that all SEND questions have category set to "SEND" and include one additional column:

  • secondary_category: The subject category of the SEND question.
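
The metadata columns make it straightforward to slice results by subdomain or level. A small sketch continuing from the loading example above (the split name "train" is an assumption):

df = cdpk["train"].to_pandas()  # datasets.Dataset -> pandas DataFrame

# Question counts per pedagogical subdomain and per educational level.
print(df["pedagogical_subdomain"].value_counts())
print(df["education_level"].value_counts())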

Citation Information

If you find this useful in your research, please consider citing the paper:

@misc{lelievre2025pedagogybenchmark,
  title={Benchmarking the Pedagogical Knowledge of Large Language Models},
  author={Maxime Lelièvre and Amy Waldock and Meng Liu and Natalia Valdés Aspillaga and Alasdair Mackintosh and María José Ogando Portela and Jared Lee and Paul Atherton and Robin A. A. Ince and Oliver G. B. Garrod},
  year={2025},
  eprint={2506.18710},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2506.18710},
}