---
language: en
tags:
- claim-grounding
- natural-language-inference
- reasoning
- classification
- grounding
- hallucination
pretty_name: Grounding Claims Dataset
license: cc-by-nc-4.0
task_categories:
- text-classification
---
The **Grounding Claims Dataset** is a multi-domain dataset for evaluating whether a natural language **claim** is grounded (i.e., supported or entailed) by a **document**. The dataset is organized into four subsets, each requiring different types of reasoning:
- **general** (1500 examples): Broad, everyday reasoning
- **logical** (1000 examples): Logical consistency and inference
- **time_and_dates** (100 examples): Temporal reasoning
- **prices_and_math** (100 examples): Numerical and mathematical reasoning
Each entry consists of:
- `doc`: A short context or passage
- `claim`: A natural language statement to verify against the `doc`
- `label`: A label indicating whether the claim is grounded in the document (`grounded` or `ungrounded`)
- `dataset`: The source subset name (e.g., `"general"`)
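To make the schema concrete, here is a minimal illustrative record. The field names match the list above; the actual values are invented for illustration and do not come from the dataset.

```python
# A hypothetical record following the dataset's schema.
# The values below are invented examples, not real dataset entries.
example = {
    "doc": "The store opens at 9 AM and closes at 6 PM on weekdays.",
    "claim": "The store is open at 10 AM on Tuesdays.",
    "label": "grounded",
    "dataset": "time_and_dates",
}

# Basic schema checks: all four fields are present and typed as strings.
assert set(example) == {"doc", "claim", "label", "dataset"}
assert all(isinstance(v, str) for v in example.values())
```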
---
## πŸ“Œ Features
| Feature | Type | Description |
|----------|---------|------------------------------------------------|
| `doc` | string | The document or passage providing the context |
| `claim` | string | A statement to verify against the document |
| `label`  | string  | `grounded` or `ungrounded` |
| `dataset`| string | The domain/subset the instance belongs to |
---
## πŸ“Š Usage
This dataset can be used to train and evaluate models on factual verification, natural language inference (NLI), and claim grounding tasks across multiple domains.
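As a sketch of how an evaluation over the subsets might look, the snippet below computes per-subset accuracy for any `predict(doc, claim) -> label` function, assuming the string labels listed in the Labels section. The two sample records and the trivial always-`grounded` baseline are invented for illustration.

```python
from collections import defaultdict

def accuracy_by_subset(examples, predict):
    """Per-subset accuracy of a predictor `predict(doc, claim) -> label`."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        total[ex["dataset"]] += 1
        if predict(ex["doc"], ex["claim"]) == ex["label"]:
            correct[ex["dataset"]] += 1
    return {name: correct[name] / total[name] for name in total}

# Invented sample records matching the dataset schema.
sample = [
    {"doc": "Apples cost $2 each.", "claim": "Three apples cost $6.",
     "label": "grounded", "dataset": "prices_and_math"},
    {"doc": "Apples cost $2 each.", "claim": "Three apples cost $5.",
     "label": "ungrounded", "dataset": "prices_and_math"},
]

# Trivial baseline: always predict "grounded".
scores = accuracy_by_subset(sample, lambda doc, claim: "grounded")
print(scores)  # {'prices_and_math': 0.5}
```

A real model would replace the lambda with a call to an NLI or grounding classifier; reporting accuracy per subset surfaces which reasoning type (temporal, numerical, logical, general) a model struggles with.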
---
## 🏷️ Labels
- `grounded` β€” The claim is grounded in the document.
- `ungrounded` β€” The claim is not supported by, or is contradicted by, the document.