RuchitRawal committed · verified · Commit 0cd4120 · 1 Parent(s): b36cf65

Update README.md

Files changed (1): README.md (+46 -0)
README.md CHANGED
@@ -16,4 +16,50 @@ configs:
  data_files:
  - split: train
    path: data/train-*
+ pretty_name: argus
+ license: cc-by-nc-sa-4.0
+ task_categories:
+ - video-text-to-text
+ language:
+ - en
  ---
+
+ ## ARGUS: Hallucination and Omission Evaluation in Video-LLMs
+
+ ARGUS is a framework for measuring the degree of hallucination and omission in free-form video captions. It reports two metrics:
+
+ * **ArgusCost‑H** (Hallucination Cost): the degree of hallucinated content in the video caption
+ * **ArgusCost‑O** (Omission Cost): the degree of video content omitted from the caption
+
+ Lower values indicate better performance on both metrics.
+
+ If you have any comments or questions, reach out to [Ruchit Rawal](https://ruchitrawal.github.io/).
+
+ Other links: [Website](https://ruchitrawal.github.io/argus/) &ensp; [Paper]() &ensp; [Code](https://github.com/JARVVVIS/argus)
+
+ ## Dataset Structure
+
+ Each row in the dataset consists of the name of a video clip, i.e. `clip_name` (dtype: str), and the corresponding `human_caption` (dtype: str). Download all the clips from [here]().
+
+ ### Loading the dataset
+ You can load the dataset easily using the Datasets library:
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset("tomg-group-umd/argus")
+ ```
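+
+ Once loaded, each row exposes the two fields described above. The snippet below is a minimal sketch of inspecting the first row; the local `clips/` directory and the `.mp4` extension are assumptions about where and how the downloaded videos are stored, not part of the dataset itself:
+
+ ```python
+ import os
+ from datasets import load_dataset
+
+ # Load the single "train" split.
+ dataset = load_dataset("tomg-group-umd/argus", split="train")
+
+ # Each row pairs a clip name with its dense human-written caption.
+ example = dataset[0]
+ print(example["clip_name"])
+ print(example["human_caption"][:200])  # preview the first 200 characters
+
+ # Hypothetical path to the corresponding downloaded clip (directory and
+ # extension are assumptions and may differ in your setup).
+ clip_path = os.path.join("clips", f"{example['clip_name']}.mp4")
+ print(clip_path, os.path.exists(clip_path))
+ ```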
+
+ ### Cite us:
+
+ TODO
+
+ ### Acknowledgements
+
+ The clips are collected from three primary sources. First, we use existing video-understanding datasets [1] that already contain captions; these videos are manually verified by human authors and well received in the community.
+ Second, we incorporate text-to-video generation datasets [2,3], which include reference videos and short prompts. Since these prompts are insufficient for dense captioning, we manually annotate 10 such videos.
+ Lastly, we curate 30 additional videos from publicly available sources, such as YouTube, under Creative Commons licenses, and manually annotate them with cross-validation among the authors.
+
+ [1] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark
+
+ [2] TC-Bench: Benchmarking Temporal Compositionality in Text-to-Video and Image-to-Video Generation
+
+ [3] https://huggingface.co/datasets/finetrainers/cakeify-smol