---
language:
- en
license: mit
tags:
- tables
- benchmark
- qa
- llms
- document-understanding
- multimodal
pretty_name: Human Centric Tables Question Answering (HCTQA)
size_categories:
- 10K<n<100K
task_categories:
- question-answering
task_ids:
- document-question-answering
- visual-question-answering
annotations_creators:
- expert-generated
configs:
- config_name: default
  data_files:
  - split: train
    path: train.parquet
  - split: validation
    path: val.parquet
  - split: test
    path: test.parquet
dataset_info:
- config_name: default
  features:
  - name: table_id
    dtype: string
  - name: table_csv_path
    dtype: string
  - name: table_image_url
    dtype: string
  - name: table_image_local_path
    dtype: string
  - name: table_csv_format
    dtype: string
  - name: table_properties
    dtype: string
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: question_template
    dtype: string
  - name: question_properties
    dtype: string
  - name: answer
    dtype: string
  - name: prompt
    dtype: string
  - name: prompt_without_system
    dtype: string
  - name: dataset_type
    dtype: string
  description: >
    Human Centric Tables Question Answering (HCTQA) is a benchmark for
    evaluating LLMs on question answering over complex human-centric tables.
    It contains both real-world and synthetic tables with associated images,
    CSVs, and structured metadata. Questions span varying levels of
    complexity, requiring models to reason over complex structures, numeric
    aggregation, and context-dependent understanding. The `dataset_type`
    field indicates whether a sample comes from real-world sources
    (`realWorldHCTs`) or was synthetically generated (`syntheticHCTs`).
---

# HCT-QA: Human-Centric Tables Question Answering

HCT-QA is a benchmark dataset designed to evaluate large language models (LLMs) on question answering over complex, human-centric tables (HCTs). These tables often appear in documents such as research papers, reports, and webpages, and present significant challenges for traditional table QA due to their non-standard layouts and compositional structure.

The dataset includes:
- 2,188 real-world tables with 9,835 human-annotated QA pairs
- 4,679 synthetic tables with 67,500 programmatically generated QA pairs
- Logical and structural metadata for each table and question

## Paper: [Title TBD]

The associated paper is currently under review and will be linked here once published.

## Dataset Splits

| Config    | Split | # Examples (Placeholder) |
|-----------|-------|--------------------------|
| RealWorld | Train | 7,500  |
| RealWorld | Test  | 2,335  |
| Synthetic | Train | 55,000 |
| Synthetic | Test  | 12,500 |
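
A minimal loading sketch: the Hub repo id below is a placeholder until the dataset is published, while the `default` config and the split names come from the YAML header above.

```python
from datasets import load_dataset

# "qcri/HCT-QA" is a placeholder repo id -- substitute the real Hub path.
ds = load_dataset("qcri/HCT-QA", name="default")

print(ds)  # DatasetDict with train / validation / test splits
sample = ds["train"][0]
print(sample["question"])  # natural-language question
print(sample["answer"])    # serialized ground-truth answer
```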

## Leaderboard

| Model Name | FT (Finetuned) | Recall | Precision |
|------------|----------------|--------|-----------|
| Model-A    | True           | 0.81   | 0.78      |
| Model-B    | False          | 0.64   | 0.61      |
| Model-C    | True           | 0.72   | 0.69      |

If you're evaluating on this dataset, open a pull request to update the leaderboard.

## Dataset Structure

Each entry in the dataset is a dictionary with the following structure:

### Sample Entry

```json
{
  "table_id": "arxiv--1--1118",
  "table_info": {
    "table_csv_path": "../tables/csvs/arxiv--1--1118.csv",
    "table_image_url": "https://hcsdtables.qcri.org/datasets/all_images/arxiv_1_1118.jpg",
    "table_image_local_path": "../tables/images/arxiv--1--1118.jpg",
    "table_properties": {
      "Standard Relational Table": true,
      "Row Nesting": false,
      "Column Aggregation": false,
      ...
    },
    "table_formats": {
      "csv": ",0,1,2\n0,Domain,Average Text Length,Aspects Identified\n1,Journalism,50,44\n..."
    }
  },
  "questions": [
    {
      "question_id": "arxiv--1--1118--M0",
      "question": "Report the Domain and the Average Text Length where the Aspects Identified equals 72",
      "gt": "{Psychology | 86} || {Linguistics | 90}",
      "question_properties": {
        "Row Filter": true,
        "Aggregation": false,
        "Returned Columns": true,
        ...
      }
    },
    ...
  ]
}
```
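
The embedded CSV string carries a positional label line before the real header row. A sketch of turning it into a DataFrame, assuming the layout shown above (nested access follows the sample entry; the flat Parquet schema instead exposes the same string as `table_csv_format`):

```python
import io

import pandas as pd

# Line 0 holds positional column labels (",0,1,2"); line 1 holds the real
# header names, so read it with that line as the header and column 0 as index.
raw = sample["table_info"]["table_formats"]["csv"]
df = pd.read_csv(io.StringIO(raw), header=1, index_col=0)
print(df.columns.tolist())  # ['Domain', 'Average Text Length', 'Aspects Identified']
```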

## Ground Truth Format

Ground-truth answers are serialized as a set of result rows: each row is wrapped in braces, values within a row are separated by `|`, and multiple rows are separated by `||`.

Example: `{value1 | value2} || {value3 | value4}`
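
This format is easy to parse mechanically. Below is a sketch (assuming values contain neither `|` nor braces), plus one plausible row-set formulation of the recall/precision reported in the leaderboard above; this is an illustration, not necessarily the paper's official metric.

```python
def parse_gt(gt: str) -> set[tuple[str, ...]]:
    """Parse '{Psychology | 86} || {Linguistics | 90}' into a set of row tuples."""
    rows = set()
    for chunk in gt.split("||"):
        chunk = chunk.strip().strip("{}").strip()
        rows.add(tuple(value.strip() for value in chunk.split("|")))
    return rows


def precision_recall(pred: str, gold: str) -> tuple[float, float]:
    """Row-set precision and recall between two serialized answers."""
    p, g = parse_gt(pred), parse_gt(gold)
    hits = len(p & g)
    return (hits / len(p) if p else 0.0, hits / len(g) if g else 0.0)


# precision_recall("{Psychology | 86}", "{Psychology | 86} || {Linguistics | 90}")
# -> (1.0, 0.5)
```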

## Table Properties

| Property Name | Definition |
|---|---|
| Standard Relational Table | TBD |
| Multi Level Column | TBD |
| Balanced Multi Level Column | TBD |
| Symmetric Multi Level Column | TBD |
| Unbalanced Multi Level Column | TBD |
| Asymmetric Multi Level Column | TBD |
| Column Aggregation | TBD |
| Global Column Aggregation | TBD |
| Local Column-Group Aggregation | TBD |
| Explicit Column Aggregation Terms | TBD |
| Implicit Column Aggregation Terms | TBD |
| Row Nesting | TBD |
| Balanced Row Nesting | TBD |
| Symmetric Row Nesting | TBD |
| Unbalanced Row Nesting | TBD |
| Asymmetric Row Nesting | TBD |
| Row Aggregation | TBD |
| Global Row Aggregation | TBD |
| Local Row-Group Aggregation | TBD |
| Explicit Row Aggregation Terms | TBD |
| Implicit Row Aggregation Terms | TBD |
| Split Header Cell | TBD |
| Row Group Label | TBD |
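
In the flat Parquet schema above, `table_properties` has dtype string. Assuming the string is JSON-encoded (as the nested sample entry suggests), a sketch of filtering the dataset by a structural property:

```python
import json

from datasets import load_dataset

ds = load_dataset("qcri/HCT-QA", name="default", split="test")  # placeholder repo id


def has_property(example, prop="Row Nesting"):
    props = json.loads(example["table_properties"])  # assumed JSON-encoded
    return bool(props.get(prop))


nested = ds.filter(has_property)
print(f"{len(nested)} / {len(ds)} test examples come from tables with row nesting")
```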

## Question Properties

| Property Name | Definition |
|---|---|
| Row Filter | TBD |
| Row Filter Condition Type Lookup | TBD |
| Row Filter Condition Type Expression | TBD |
| Row Filter Involved Columns Single | TBD |
| Row Filter Involved Columns Multiple | TBD |
| Row Filter Max Depth Of Involved Columns | TBD |
| Row Filter Retained Rows Single | TBD |
| Row Filter Retained Rows Multiple | TBD |
| Row Filter Num Of Conditions | TBD |
| Returned Columns | TBD |
| Returned Columns Project On Plain | TBD |
| Returned Columns Project On Expression | TBD |
| Returned Columns Max Depth | TBD |
| Returned Columns Expression In Table Present | TBD |
| Returned Columns Expression In Table Not Present | TBD |
| Returned Columns Num Of Output Columns | TBD |
| Yes/No | TBD |
| Aggregation | TBD |
| Aggregation Type Sum | TBD |
| Aggregation Type Avg | TBD |
| Aggregation Grouping Global | TBD |
| Aggregation Grouping Local | TBD |
| Rank | TBD |
| Rank Type | TBD |
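
Along the same lines (and under the same JSON-encoding assumption), a quick breakdown of how often each question property is set, reusing `ds` from the previous snippet:

```python
import json
from collections import Counter

counts = Counter()
for example in ds:
    props = json.loads(example["question_properties"])  # assumed JSON-encoded
    counts.update(name for name, flag in props.items() if flag)

for name, n in counts.most_common():
    print(f"{name}: {n}")
```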