---
license: apache-2.0
task_categories:
- visual-question-answering
- zero-shot-classification
language:
- en
tags:
- fact-checking
- claim-verification
- multimodal
pretty_name: ClaimReview2024+
size_categories:
- n<1K
extra_gated_prompt: "**Terms of Use**: The dataset contains images that, by law, are protected by copyright. Therefore, the dataset **must not** be published to the broad public. Only researchers, educators, and students in the field of automated fact-checking may get access to this dataset—for **non-commercial** use only."
extra_gated_fields:
  First name: text
  Last name: text
  Institutional email: text
  Affiliation: text
  Country: country
  I want to use this dataset for:
    type: select
    options:
      - Research
      - Education
  I agree to use this dataset for non-commercial use ONLY: checkbox
---
# ClaimReview2024+ Benchmark
[![Paper](https://img.shields.io/badge/ICML_Paper-EC6500?style=for-the-badge&logo=bookstack&logoColor=white)](https://arxiv.org/abs/2412.10510)&nbsp;&nbsp;&nbsp;[![License](https://img.shields.io/badge/License-Apache--2.0-F5A300?style=for-the-badge)](https://opensource.org/licenses/Apache-2.0)

This is the **ClaimReview2024+ (CR+)** benchmark, a dataset for evaluating multimodal automated fact-checking systems. The task is to classify each claim as `supported`, `refuted`, `misleading`, or `not enough information`. CR+ consists of 300 real-world claims sourced from professional fact-checking articles via the [ClaimReview](https://www.claimreviewproject.com/) markup. CR+ was specifically constructed to avoid the **data leakage** problem: claims published before GPT-4o's knowledge cutoff in October 2023 may already be known to GPT-4o. Hence, CR+ contains only claims from fact-checking articles released on or after November 1, 2023. Of the 300 instances, 140 include an image; the rest are text-only.
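
The dataset is gated; once your access request is approved, it can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal, hedged sketch: the repository ID and split name are assumptions, so check the dataset page for the exact values.

```python
# Minimal loading sketch (repository ID and split name are assumptions; verify on the dataset page).
from datasets import load_dataset

dataset = load_dataset(
    "multimodal-ai-lab/ClaimReview2024plus",  # assumed repo ID
    split="test",                             # assumed split name
    token=True,                               # gated dataset: uses your local Hugging Face token
)

print(dataset)      # dataset summary (features, number of rows)
print(dataset[0])   # one instance; field names depend on the dataset schema
```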
CR+ was constructed along with [DEFAME](https://github.com/multimodal-ai-lab/DEFAME), the current state-of-the-art multimodal fact-checking system and the first that can handle both multimodal claims and multimodal evidence. DEFAME achieved an **accuracy of 69.7%** on CR+.
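
To put the reported number in context, scoring a system on CR+ reduces to four-way classification accuracy over the verdict labels. The snippet below is a hedged sketch of that computation; the prediction and reference formats (plain label strings) are assumptions, not the benchmark's official evaluation code.

```python
# Four-way verdict accuracy on CR+ (assumed label-string format for predictions and references).
def accuracy(predictions, references):
    """Fraction of claims whose predicted verdict matches the reference verdict."""
    assert len(predictions) == len(references), "Prediction/reference counts must match."
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

# Example usage with dummy values:
preds = ["supported", "refuted", "misleading"]
refs  = ["supported", "refuted", "not enough information"]
print(f"Accuracy: {accuracy(preds, refs):.1%}")  # Accuracy: 66.7%
```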
For more details on CR+, check out the [ICML paper](https://arxiv.org/abs/2412.10510).
## Examples
<img src="preview.png" width="500">
## Cite this Work
Please use the following BibTeX entry to cite this work:
```bibtex
@inproceedings{braun2024defame,
  title     = {{DEFAME: Dynamic Evidence-based FAct-checking with Multimodal Experts}},
  author    = {Tobias Braun and Mark Rothermel and Marcus Rohrbach and Anna Rohrbach},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  year      = {2025},
  url       = {https://arxiv.org/abs/2412.10510},
}
```