TiQuAD: Tigrinya Question Answering Dataset

TiQuAD is a human-annotated question-answering dataset for the Tigrinya language. The dataset contains 10.6K question-answer pairs (6.5K unique questions) spanning 572 paragraphs extracted from 290 news articles covering diverse topics including politics, sports, culture, and events.

Published with the paper: Question-Answering in a Low-resourced Language: Benchmark Dataset and Models for Tigrinya (ACL 2023)

Dataset Overview

TiQuAD follows the SQuAD format and is designed for extractive question answering in Tigrinya, a low-resource Semitic language primarily spoken in Eritrea and Ethiopia. The dataset features:

  • Human-annotated: All question-answer pairs are created by native Tigrinya speakers
  • Multiple annotations: Validation and test sets include up to 3 answer annotations per question
  • Diverse content: Questions span factual, inferential, and analytical reasoning types
  • Article-based partitioning: Train/validation/test splits are divided by source articles to prevent data leakage; a single article may contribute multiple paragraphs.

Split        Articles   Paragraphs   Questions   Answers
Train        205        408          4,452       4,454
Validation   43         76           934         2,805
Test*        42         96           1,122       3,378
Total        290        572          6,508       10,637

*The test set is not publicly available (see the section on test set access below)

Dataset Construction

The dataset was constructed through a careful multi-stage process:

  1. Article Collection: 290 news articles were gathered from Tigrinya news sources
  2. Paragraph Extraction: 572 informative paragraphs were selected and cleaned
  3. Question Generation: Native speakers created questions based on paragraph content
  4. Answer Annotation: Extractive answers were marked with character-level spans
  5. Multi-annotation: Validation and test samples received additional annotations for reliability

How to Load TiQuAD

from datasets import load_dataset

# Load the publicly available train and validation splits
tiquad = load_dataset("fgaim/tiquad", trust_remote_code=True)
print(tiquad)

# Expected output:
DatasetDict({
    train: Dataset({
        features: ['id', 'title', 'context', 'question', 'answers'],
        num_rows: 4452
    }),
    validation: Dataset({
        features: ['id', 'title', 'context', 'question', 'answers'],
        num_rows: 934
    })
})

Data Fields

  • id: Unique identifier for each question-answer pair
  • title: Title of the source news article
  • context: The paragraph containing the answer (in Tigrinya)
  • question: The question to be answered (in Tigrinya)
  • answers: Dictionary containing:
    • text: List of answer strings (multiple annotations for validation/test)
    • answer_start: List of character positions where each answer begins in the context (see the access sketch below)
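
For illustration, here is a minimal sketch of accessing these fields from the dataset loaded above; the span check simply reflects the character-level annotation of answers described in this card:

# Minimal sketch: inspect one validation example and its fields
example = tiquad["validation"][0]

print(example["id"])        # unique identifier
print(example["title"])     # source article title
print(example["question"])  # question text in Tigrinya

# 'answers' pairs each annotated answer string with its start offset in the context
for text, start in zip(example["answers"]["text"], example["answers"]["answer_start"]):
    # the extractive answer should match the corresponding context slice
    assert example["context"][start:start + len(text)] == text
    print(text, start)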

Sample Entry

{
    "id": "5dda7d3e-f76f-4500-a3af-07648a1afa51",
    "title": "ሃብቶም ክብረኣብ (ሞጀ)",
    "context": "ሃብቶም ክብረኣብ (ሞጀ)\nሞጀ ኣብ 80’ታትን ኣብ ፈለማ 90’ታትን ካብቶም ናይ ክለብ ኣልታሕሪር ንፉዓት ተኸላኸልቲ ነይሩ፣ ብድሕሪ’ዚ’ውን ኣብ ውድድራት ሓይልታት ምክልኻል ንንውሕ ዝበለ ዓመታት ከም ኣሰልጣኒ ክለብ በኒፈር ኮይኑ ዝነጥፍ ዘሎ ገዲም ተጻዋታይን ኣሰልጣንን’ዩ። ምሉእ ስሙ ሃብቶም ክብርኣብ (ሞጀ) እዩ። ሞጀ ብ1968 ኣብ ኣስመራ ተወሊዱ ዓብዩ። ንሱ ካብ ንኡስ ዕድሚኡ ብኩዕሶ ጨርቂ ጸወታ ጀሚሩ። ብድሕሪኡ ብደረጃ ምምሕዳር ኣብ ዝካየድ ዝነበረ ናይ ‘ቀበሌ’ ጸወታታት ምስ ጸሓይ በርቂ ምስ እትበሃል ጋንታ ተጻዊቱ። ኣብ 1987 ምስ ዳህላክ እትበሃል ዝነበረት ጋንታ ንሓደ ዓመት ድሕሪ ምጽዋቱ ከኣ ኣብ መወዳእታ ወርሒ 1987 ናብ ጋንታ ፖሊስ (ናይ ሎሚ ኣልታሕሪር) ብምጽንባር ክሳብ 1988 ተጻዊቱ። ምስታ ናይ ቅድሚ ናጽነት ጋንታ ፖሊስ ኣብ ዝተጻወተሉ ሰለስተ ዓመታት ከኣ ዝተፈላለየ ዓወታት ተጐናጺፉ ዋናጩ ከልዕል በቒዑ’ዩ። ድሕሪ ናጽነት ስም ክለቡ ኣልታሕሪር ምስ ተቐየረ፣ ሞጀ ናይታ ክለብ ተጻዋታይ ኮይኑ ውድድሩ ቀጺሉ። ኣብ መጀመርታ ናጽነት (1991) ኣብ ዝተኻየደ ናይ ፋልማይ ዋንጫ ስውኣት መን ዓተረ ውድድር ሞጀ ምስ ክለቡ ኣልታሕሪር ዋንጫ ከልዕል በቒዑ። ብዘይካ’ዚ ኣብ 1992 ብብሉጻት ተጻወትቲ ተተኽቲኻ ዝነበረት ኣልታሕሪር ናይ ፋልማይ ዋንጫ ናጽነት ከምኡ’ውን ሻምፕዮን ክትከውን ከላ ሞጀ ኣባል’ታ ጋንታ ነይሩ። ምስ ክለብ ኣልታሕሪር ፍቕርን ሕውነትን ዝመልኦ ምቁር ናይ ጸወታ ዘመን ከም ዘሕለፈ ዝጠቅስ ሞጀ፣ ምስ ኣልታሕሪር ናብ ከም ሱዳንን ኢትዮጵያን ዝኣመሰላ ሃገራት ብምጋሽ ኣህጉራዊ ጸወታታት’ውን ኣካይዱ’ዩ።",
    "question": "ኣልታሕሪር ናይ ቅድም ስማ እንታይ ኔሩ?",
    "answers": {
        "text": ["ፖሊስ", "ፖሊስ", "ጋንታ ፖሊስ"], 
        "answer_start": [414, 414, 410]
    }
}

Note: Samples in the validation set include up to three answer annotations by different annotators to enable human performance estimation and robust evaluation.

TiQuAD Test Set Access

To maintain evaluation integrity and avoid data contamination in models trained on web-crawled data, the test set of TiQuAD is not publicly available. This practice helps ensure fair and reliable evaluation of question-answering systems.

For Researchers

If you are a researcher interested in evaluating your models on the TiQuAD test set, please email [email protected] with the following information:

  • Subject: TiQuAD Test Set Request
  • Your full name
  • Affiliation (university or company)
  • Purpose and intended use of the TiQuAD test set
  • Acknowledgment that the dataset will be used for evaluation only, not for model training

We will review your request and share the test set directly with appropriate usage agreements. This controlled access helps maintain the dataset's value for fair benchmarking while supporting legitimate research.

Evaluation Protocol

For reproducible evaluation, we recommend:

  • Using exact match (EM) and token-level F1 score as primary metrics
  • Handling multiple reference answers appropriately by taking the maximum score across references (see the sketch below)
  • Reporting results on both validation and test sets when available
  • Alternatively, using the official TiQuAD evaluation script for consistency, which also handles normalization of common articles
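
As a rough illustration of the metrics above, the following sketch computes SQuAD-style exact match and token-level F1 with the maximum taken across reference answers. The normalization shown here (lowercasing and punctuation stripping) is a simplifying assumption; the official TiQuAD evaluation script, which also normalizes common articles, should be used for reported results.

import collections
import string

def normalize(text):
    # Simplified normalization: lowercase, drop punctuation, collapse whitespace.
    # (The official script additionally normalizes common articles.)
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    return " ".join(text.split())

def exact_match(prediction, reference):
    return int(normalize(prediction) == normalize(reference))

def f1_score(prediction, reference):
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = collections.Counter(pred_tokens) & collections.Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def score(prediction, references):
    # Take the maximum score across all available reference annotations
    em = max(exact_match(prediction, ref) for ref in references)
    f1 = max(f1_score(prediction, ref) for ref in references)
    return em, f1

# Example, using the three reference annotations from the sample entry above
em, f1 = score("ፖሊስ", ["ፖሊስ", "ፖሊስ", "ጋንታ ፖሊስ"])
print(em, f1)  # 1 1.0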

Baseline Results

The original paper reports the following baseline results on TiQuAD:

Model   EM (Val)   F1 (Val)   EM (Test)   F1 (Test)
mBERT   42.1       58.6       39.7        56.2
XLM-R   45.8       62.4       43.1        59.8

For detailed experimental setup and additional baselines, please refer to the original paper.

Citation

If you use this dataset in your work, please cite:

@inproceedings{gaim-etal-2023-tiquad,
    title = "Question-Answering in a Low-resourced Language: Benchmark Dataset and Models for {T}igrinya",
    author = "Fitsum Gaim and Wonsuk Yang and Hancheol Park and Jong C. Park",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.acl-long.661",
    pages = "11857--11870",
}

Data Concerns and Corrections

We acknowledge that we do not own the copyright to the original news articles from which this dataset was derived. The articles were used under fair use principles for academic research purposes only.

If you identify any of the following issues with the dataset, please contact us:

  • Sensitive or inappropriate content that should be removed from the dataset
  • Annotation errors or inconsistencies in the question-answer pairs
  • Copyright concerns regarding the use of specific content
  • Factual inaccuracies in the source material or annotations

How to report issues:

Please email the address listed under "For Researchers" above with a brief description of the problem and, where possible, the IDs of the affected samples. We are committed to addressing legitimate concerns promptly and to maintaining the quality and appropriateness of this resource for the research community.

Acknowledgments

We thank the native Tigrinya speakers who contributed to the annotation process, as well as the Eritrean Ministry of Information (https://shabait.com) and the Hadas Ertra newspaper, whose articles formed the basis of this dataset. This work contributes to advancing NLP research for low-resource African languages.

License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

