nielsr (HF Staff) committed · verified
Commit 9209ed9 · 1 parent: c5a6f0d

Populate ImplicitQA dataset card content and links


This PR populates the `README.md` content for the ImplicitQA dataset, providing a detailed description of its purpose, composition, and the implicit reasoning challenges it addresses, derived from the paper abstract. It includes links to the associated Hugging Face paper page ([https://huggingface.co/papers/2506.21742](https://huggingface.co/papers/2506.21742)) and the project page ([https://implicitqa.github.io/](https://implicitqa.github.io/)).

Additionally, it adds the `implicit-reasoning` tag to the dataset metadata for improved discoverability and categorization within the Hugging Face Hub.

Files changed (1): README.md (+28 −5)
README.md CHANGED

```diff
@@ -1,15 +1,38 @@
 ---
-license: mit
-task_categories:
-- video-text-to-text
 language:
 - en
-pretty_name: ImplicitQA
+license: mit
 size_categories:
 - 1K<n<10K
+task_categories:
+- video-text-to-text
+pretty_name: ImplicitQA
 configs:
 - config_name: default
   data_files:
   - split: eval
     path: ImplicitQAv0.1.2.jsonl
----
+tags:
+- implicit-reasoning
+---
+
+# ImplicitQA Dataset
+
+The ImplicitQA dataset was introduced in the paper [ImplicitQA: Going beyond frames towards Implicit Video Reasoning](https://huggingface.co/papers/2506.21742).
+
+**Project page:** https://implicitqa.github.io/
+
+ImplicitQA is a novel benchmark specifically designed to test models on **implicit reasoning** in Video Question Answering (VideoQA). Unlike existing VideoQA benchmarks that primarily focus on questions answerable through explicit visual content (actions, objects, events directly observable within individual frames or short clips), ImplicitQA addresses the need for models to infer motives, causality, and relationships across discontinuous frames. This mirrors human-like understanding of creative and cinematic videos, which often employ storytelling techniques that deliberately omit certain depictions.
+
+The dataset comprises 1,000 meticulously annotated QA pairs derived from over 320 high-quality creative video clips. These QA pairs are systematically categorized into key reasoning dimensions, including:
+
+* Lateral and vertical spatial reasoning
+* Depth and proximity
+* Viewpoint and visibility
+* Motion and trajectory
+* Causal and motivational reasoning
+* Social interactions
+* Physical context
+* Inferred counting
+
+The annotations are deliberately challenging, crafted to ensure high quality and to highlight the difficulty of implicit reasoning for current VideoQA models. By releasing both the dataset and its data collection framework, the authors aim to stimulate further research and development in this crucial area of AI.
```
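As a quick sanity check of the new `configs` block, the single `eval` split can be loaded with the `datasets` library or read directly with `pandas`. A minimal sketch, assuming a local copy of `ImplicitQAv0.1.2.jsonl`; the Hub repo ID below is a placeholder, since this PR does not state it:

```python
from datasets import load_dataset
import pandas as pd

# Placeholder repo ID -- substitute the dataset's actual Hub path.
ds = load_dataset("your-org/ImplicitQA", split="eval")
print(ds)     # features and row count (~1,000 QA pairs)
print(ds[0])  # first annotated QA record

# Alternatively, read the JSON Lines file named in the config directly.
df = pd.read_json("ImplicitQAv0.1.2.jsonl", lines=True)
print(df.head())
```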