---
dataset_info:
  features:
  - name: clip_name
    dtype: string
  - name: human_caption
    dtype: string
  splits:
  - name: train
    num_bytes: 1544750
    num_examples: 500
  download_size: 806248
  dataset_size: 1544750
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
pretty_name: argus
license: cc-by-nc-sa-4.0
task_categories:
- video-text-to-text
language:
- en
---

## ARGUS: Hallucination and Omission Evaluation in Video-LLMs

ARGUS is a framework to calculate the degree of hallucination and omission in free-form video captions.

* **ArgusCost‑H** (or Hallucination-Cost) — degree of hallucinated content in the video caption
* **ArgusCost‑O** (or Omission-Cost) — degree of omitted content in the video caption

Lower values indicate better "performance".
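As a purely illustrative sketch (not the official ARGUS pipeline — see the code link below for that), the two costs can be thought of as per-caption scores aggregated over the benchmark; the field names, numbers, and the mean aggregation here are assumptions for illustration only:

```python
# Hypothetical per-caption costs from an evaluation run; the values and field
# names are placeholders, not real ARGUS outputs.
per_caption = [
    {"clip_name": "clip_0001", "hallucination_cost": 0.12, "omission_cost": 0.30},
    {"clip_name": "clip_0002", "hallucination_cost": 0.05, "omission_cost": 0.45},
]

# Aggregate to model-level scores (assumed here to be a simple mean).
argus_cost_h = sum(r["hallucination_cost"] for r in per_caption) / len(per_caption)
argus_cost_o = sum(r["omission_cost"] for r in per_caption) / len(per_caption)

print(f"ArgusCost-H: {argus_cost_h:.2f}  ArgusCost-O: {argus_cost_o:.2f}")  # lower is better
```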

If you have any comments or questions, reach out to: [Ruchit Rawal](https://ruchitrawal.github.io/)

Other links - [Website](https://ruchitrawal.github.io/argus/) &ensp; [Paper](https://arxiv.org/abs/2506.07371) &ensp; [Code](https://github.com/JARVVVIS/argus)

## Dataset Structure

Each row in the dataset consists of the name of the video clip, i.e. `clip_name` (dtype: str), and the corresponding `human_caption` (dtype: str). Download all the clips from [here]().

### Loading the dataset
You can load the dataset easily using the Datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("tomg-group-umd/argus")
```
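Each example can then be indexed like a regular Datasets split. The sketch below prints one row and shows how you might pair a caption with a locally downloaded clip; the `argus_clips` directory and the `.mp4` extension are assumptions for illustration, not part of this dataset:

```python
import os

from datasets import load_dataset

dataset = load_dataset("tomg-group-umd/argus")

# Each example pairs a clip identifier with its dense human-written caption.
example = dataset["train"][0]
print(example["clip_name"])
print(example["human_caption"])

# If you have downloaded the clips separately (see the link above), you can pair
# each caption with its video file. The directory name and ".mp4" extension
# below are assumptions.
video_path = os.path.join("argus_clips", f"{example['clip_name']}.mp4")
```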

### Cite us:

TODO

### Acknowledgements

The clips are collected from three primary sources. First, we use existing video understanding datasets [1] that already contain captions; these videos were manually verified by human annotators and are well received in the community.
Second, we incorporate text-to-video generation datasets [2,3], which include reference videos and short prompts. Since these prompts are insufficient for dense captioning, we manually annotate 10 such videos.
Lastly, we curate 30 additional videos from publicly available sources, such as YouTube, under Creative Commons licenses, and manually annotate them, with cross-validation among the authors.

[1] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark

[2] TC-Bench: Benchmarking Temporal Compositionality in Text-to-Video and Image-to-Video Generation

[3] https://huggingface.co/datasets/finetrainers/cakeify-smol