---
language:
  - en
  - es
pretty_name: ' 💾🏋️💾 DataBench 💾🏋️💾'
tags:
  - table-question-answering
  - table
  - qa
license: mit
task_categories:
  - table-question-answering
  - question-answering
default: qa
configs:
  - config_name: qa
    data_files:
      - ./066_40db_Igualdad/qa.parquet
      - ./067_40dB_Dormir/qa.parquet
      - ./068_CIS_Enero_Marzo_2023/qa.parquet
      - ./069_CEA_Barometro_Andaluz_Septiembre_2023/qa.parquet
      - ./070_CIS_2023_Salud_Bienestar/qa.parquet
      - ./071_CIS_Politica_Fiscal_Julio_2023/qa.parquet
      - ./072_CIS_Relaciones_Afectivas_Pospandemia_III/qa.parquet
      - ./073_CIS_Barometro_Diciembre_2022/qa.parquet
      - ./074_40dB_Percepcion_Amor/qa.parquet
      - ./075_CIS_Salud_Mental_Pandemia_2021/qa.parquet
      - ./066_Influencers/qa.parquet
      - ./066_Clustering/qa.parquet
      - ./066_RFM/qa.parquet
---

# 💾🏋️💾 DataBench 💾🏋️💾

This repository contains the original datasets used for the paper [Towards Quality Benchmarking in Question Answering over Tabular Data in Spanish](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6617), which appeared at SEPLN 2024.

It is a spin-off of the original suite in English, which you can find here.

Spa-DataBench brings together ten tabular datasets from Spain’s major survey agencies: CIS, CEA, CRS, and 40dB. These datasets, all publicly available, have been unified and enriched with our typing system (see Section 3.2 of the paper) to simplify processing. Designed as an evaluation benchmark, each dataset is paired with twenty uniquely crafted questions along with their corresponding gold answers, totaling 200 Q&A pairs. This tuple-based structure (dataset, questions, answers) makes it easy to incorporate new datasets.

## Usage

```python
from datasets import load_dataset

# Load all QA pairs
all_qa = load_dataset("SINAI/databenchSPA")
```

You can then use any library that reads Parquet to load the actual tables from which the answers are to be retrieved.

For example, using pandas in Python:

```python
import pandas as pd

# id of the dataset the first QA pair refers to, e.g. "066_40db_Igualdad"
ds_id = all_qa['train']['dataset'][0]

# full dataset
df = pd.read_parquet(f"hf://datasets/SINAI/databenchSPA/{ds_id}/all.parquet")

# sampled dataset (DataBench lite)
df = pd.read_parquet(f"hf://datasets/SINAI/databenchSPA/{ds_id}/sample.parquet")
```
## 📊 Datasets

| #  | Name                 | Rows  | Columns | #QA | Source (Reference) |
| -- | -------------------- | ----- | ------- | --- | ------------------ |
| 1  | Encuesta de Igualdad | 2000  | 105     | 20  | 40dB               |
| 2  | Calidad del Sueño    | 2000  | 80      | 20  | 40dB               |
| 3  | Fusión Barómetros    | 7430  | 161     | 20  | CIS                |
| 4  | Barómetro Andaluz    | 5349  | 85      | 20  | CEA                |
| 5  | Juventud             | 1510  | 236     | 20  | CRS                |
| 6  | Política Fiscal      | 3011  | 198     | 20  | CIS                |
| 7  | Relaciones           | 2491  | 186     | 20  | CIS                |
| 8  | Barómetro Mensual    | 2444  | 185     | 20  | CIS                |
| 9  | Percepción del Amor  | 2000  | 150     | 20  | 40dB               |
| 10 | Salud Mental         | 3083  | 354     | 20  | CIS                |
|    | **Total**            | 31318 | 1741    | 200 |                    |


## 🏗️ Folder structure
Each folder represents one dataset. You will find the following files within:

* all.parquet: the processed data, with each column tagged with our typing system, in [parquet](https://arrow.apache.org/docs/python/parquet.html).
* qa.parquet: contains the human-made set of questions, tagged by type, for the dataset (sample_answer indicates the answers for DataBench lite)
* info.yml: additional information about the dataset
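
As an illustration (not part of the repository itself), the files for a single dataset can be read directly from the Hub; the folder name below is taken from the configuration above, and `pyyaml` is only needed for `info.yml`:

```python
import pandas as pd
import yaml
from huggingface_hub import hf_hub_download

repo = "SINAI/databenchSPA"
folder = "066_40db_Igualdad"  # one dataset folder, taken from the config above

# processed table and its human-made questions
table = pd.read_parquet(f"hf://datasets/{repo}/{folder}/all.parquet")
qa = pd.read_parquet(f"hf://datasets/{repo}/{folder}/qa.parquet")

# additional information about the dataset
info_path = hf_hub_download(repo, f"{folder}/info.yml", repo_type="dataset")
with open(info_path) as f:
    info = yaml.safe_load(f)
```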

## 🗂️ Column typing system
To set the stage for later analysis, we have categorized the columns by type. This information allows us to segment different kinds of data so that we can subsequently analyze the model's behavior on each column type separately. All parquet files have been cast to their smallest viable data types using the open source [Lector](https://github.com/graphext/lector) reader.

This means the data types carry more granular information: whether a column contains NaNs (following pandas’ convention of Int vs. int), whether numerical columns may contain negative values (UInt vs. Int), and their range. We also keep dates with potential timezone information (although for now they are all UTC), as well as information about the cardinality of categories, coming from the Arrow types.
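
A quick way to see this granular typing is to inspect the dtypes of a loaded table. A minimal sketch, reusing the `ds_id` variable from the usage example above and assuming the parquet files preserve the pandas extension dtypes they were written with:

```python
import pandas as pd

# reuse ds_id from the usage example above
path = f"hf://datasets/SINAI/databenchSPA/{ds_id}/all.parquet"
df = pd.read_parquet(path)

# nullable integers (Int/UInt), categoricals, datetimes, etc.
print(df.dtypes)
```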

The table below lists the data types assigned to the columns, together with the number of columns of each type. The most common data types are categories and numbers, accounting for 1733 of the 1741 columns included in Spa-DataBench. These are followed by rarer types such as dates, free text, and lists of elements.

| Type           | Columns | Example                 |
| -------------- | ------- | ----------------------- |
| number         | 269   | 1                     |
| category       | 1464  | banana                |
| date           | 2     | 1979-01-01            |
| text           | 1     | A blue rabbit went to... |
| list[number]   | 1     | [10,11,12]            |
| list[category] | 4     | [banana, pineapple]   |

## 🔗 Reference

You can download the paper [here](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6617).

If you use this resource, please cite the following reference:

```bibtex
@article{DBLP:journals/pdln/GrijalbaLCC24,
  author  = {Jorge Osés Grijalba and Luis Alfonso Ureña López and José Camacho-Collados and Eugenio Martínez Cámara},
  title   = {Towards Quality Benchmarking in Question Answering over Tabular Data in Spanish},
  journal = {Procesamiento del Lenguaje Natural},
  volume  = {73},
  pages   = {283--296},
  year    = {2024},
  url     = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6617}
}
```